Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-3642 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-Dx3Yz4DG3wTJ/agent.2130
SSH_AGENT_PID=2132
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_2287659558898128674.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_2287659558898128674.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision 8bc6d4e4bbc319e1319b18c68aba7a6e0a7dc89d (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8bc6d4e4bbc319e1319b18c68aba7a6e0a7dc89d # timeout=30
Commit message: "update references for newdelhi branch"
 > git rev-list --no-walk 8bc6d4e4bbc319e1319b18c68aba7a6e0a7dc89d # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins6621610968379972312.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-PZ3T
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-PZ3T/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-PZ3T/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.3.0
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.111
botocore==1.34.111
bs4==0.0.2
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.1.1
filelock==3.14.0
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.29.0
httplib2==0.22.0
identify==2.5.36
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.10
lxml==5.2.2
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.1.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==6.0.0
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==24.0
pbr==6.0.0
platformdirs==4.2.2
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.5.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.6.0
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.2
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.1
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.1
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.5
tqdm==4.66.4
typing_extensions==4.11.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.26.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
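The lf-activate-venv() step above amounts to: create a throwaway python3 venv under /tmp, install lftools into it, and prepend its bin directory to PATH so the build's tooling resolves there first. A minimal sketch of that flow, assuming only that python3 is on PATH (the directory name pattern is illustrative; the job's actual path, e.g. /tmp/venv-PZ3T, is randomized):

```shell
# Sketch of the lf-activate-venv flow; directory name is illustrative.
VENV_DIR="$(mktemp -d /tmp/venv-XXXX)"    # job logs show e.g. /tmp/venv-PZ3T
python3 -m venv "$VENV_DIR"               # create the isolated interpreter
# "$VENV_DIR/bin/pip" install lftools     # network step, as the log shows
export PATH="$VENV_DIR/bin:$PATH"         # venv binaries now win PATH lookup
"$VENV_DIR/bin/pip" --version             # confirm the venv's pip is in use
```

The "Generating Requirements File" output that follows is simply a pip freeze of this venv, which is why the package list is dominated by lftools' dependency tree (openstack clients, jenkins/gerrit helpers, and so on).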
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins16211138815882487234.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins6138686898223583624.sh
+ /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress table trimmed: 60.0M plugin binary downloaded]
Setting project configuration for: pap
Configuring docker compose...
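When `docker compose` is not a recognized subcommand, the job falls back to fetching the Compose v2 CLI plugin binary with curl, as seen above. A hedged sketch of that check-and-install flow; the version pin and plugin directory here are assumptions for illustration, not values taken from run-project-csit.sh (the real download is left commented out):

```shell
# Sketch only: COMPOSE_VERSION and PLUGIN_DIR are illustrative assumptions.
COMPOSE_VERSION="v2.27.0"
PLUGIN_DIR="${DOCKER_CONFIG:-$HOME/.docker}/cli-plugins"
URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"

if docker compose version >/dev/null 2>&1; then
    echo "compose plugin already present"
else
    echo "would install $URL -> $PLUGIN_DIR/docker-compose"
    # mkdir -p "$PLUGIN_DIR"
    # curl -fsSL "$URL" -o "$PLUGIN_DIR/docker-compose"
    # chmod +x "$PLUGIN_DIR/docker-compose"
fi
```

The login warnings immediately above are unrelated to compose: they flag that the job passed --password on the CLI, and Docker's suggested fix is to pipe the secret via `docker login --password-stdin` and configure a credential helper.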
Starting apex-pdp application with Grafana
pap Pulling
simulator Pulling
apex-pdp Pulling
mariadb Pulling
zookeeper Pulling
kafka Pulling
prometheus Pulling
grafana Pulling
policy-db-migrator Pulling
api Pulling
[per-layer docker pull progress trimmed: repeated "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Download complete" / "Extracting" status lines for each image layer, several layers shared across images]
31e352740f53 Pull complete
84b15477ea97 Pull complete
e4a126ec7ec2 Pull complete
493f88a6b82e Pull complete
773cf06e34cf Pull complete
7e6a4282241c Extracting [==================================================>] 127.4kB/127.4kB
181573c3afd9 Extracting [=====>
] 3.604MB/32.98MB 286d4cd18b47 Extracting [==================> ] 66.29MB/180.3MB 286d4cd18b47 Extracting [=====================> ] 76.32MB/180.3MB 181573c3afd9 Extracting [==========> ] 6.849MB/32.98MB 7e6a4282241c Pull complete 4c1f466ebe00 Extracting [==================================================>] 1.143kB/1.143kB 4c1f466ebe00 Extracting [==================================================>] 1.143kB/1.143kB 181573c3afd9 Extracting [=============> ] 8.651MB/32.98MB 286d4cd18b47 Extracting [======================> ] 81.89MB/180.3MB 4c1f466ebe00 Pull complete 181573c3afd9 Extracting [==================> ] 12.26MB/32.98MB 286d4cd18b47 Extracting [========================> ] 88.01MB/180.3MB d41765bdaaef Downloading [=> ] 3.051kB/127kB d41765bdaaef Downloading [==================================================>] 127kB/127kB d41765bdaaef Verifying Checksum d41765bdaaef Download complete d41765bdaaef Extracting [============> ] 32.77kB/127kB 04266f1fe01b Downloading [==========================================> ] 211.9MB/246.5MB d41765bdaaef Extracting [==================================================>] 127kB/127kB f251a7d099f8 Downloading [==================================================>] 1.327kB/1.327kB f251a7d099f8 Verifying Checksum f251a7d099f8 Download complete 3595edb9cc0c Downloading [> ] 539.6kB/98.32MB 181573c3afd9 Extracting [====================> ] 13.7MB/32.98MB 04266f1fe01b Downloading [============================================> ] 217.9MB/246.5MB 286d4cd18b47 Extracting [=========================> ] 90.24MB/180.3MB 9691625c9fe6 Extracting [> ] 557.1kB/84.46MB 3595edb9cc0c Downloading [==> ] 4.865MB/98.32MB 04266f1fe01b Downloading [=============================================> ] 223.8MB/246.5MB 9691625c9fe6 Extracting [=> ] 3.342MB/84.46MB 181573c3afd9 Extracting [======================> ] 14.78MB/32.98MB 286d4cd18b47 Extracting [=========================> ] 91.91MB/180.3MB d41765bdaaef Pull complete f251a7d099f8 Extracting 
[==================================================>] 1.327kB/1.327kB f251a7d099f8 Extracting [==================================================>] 1.327kB/1.327kB 3595edb9cc0c Downloading [======> ] 12.98MB/98.32MB 04266f1fe01b Downloading [==============================================> ] 231.4MB/246.5MB 9691625c9fe6 Extracting [====> ] 7.799MB/84.46MB 286d4cd18b47 Extracting [=========================> ] 93.59MB/180.3MB 181573c3afd9 Extracting [=========================> ] 16.58MB/32.98MB 3595edb9cc0c Downloading [=========> ] 18.38MB/98.32MB 04266f1fe01b Downloading [================================================> ] 237.4MB/246.5MB 9691625c9fe6 Extracting [=======> ] 12.26MB/84.46MB 181573c3afd9 Extracting [===========================> ] 18.02MB/32.98MB 286d4cd18b47 Extracting [==========================> ] 95.81MB/180.3MB f251a7d099f8 Pull complete 3595edb9cc0c Downloading [=============> ] 26.49MB/98.32MB 04266f1fe01b Downloading [=================================================> ] 245.5MB/246.5MB 04266f1fe01b Verifying Checksum 04266f1fe01b Download complete 9691625c9fe6 Extracting [=========> ] 16.15MB/84.46MB ffe123cfbf03 Downloading [==================================================>] 1.297kB/1.297kB ffe123cfbf03 Verifying Checksum ffe123cfbf03 Download complete 181573c3afd9 Extracting [==============================> ] 19.82MB/32.98MB 286d4cd18b47 Extracting [===========================> ] 98.04MB/180.3MB 9fa9226be034 Downloading [> ] 15.3kB/783kB 3595edb9cc0c Downloading [================> ] 32.44MB/98.32MB 9fa9226be034 Downloading [==================================================>] 783kB/783kB 9fa9226be034 Verifying Checksum 9fa9226be034 Download complete 9fa9226be034 Extracting [==> ] 32.77kB/783kB 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 1617e25568b2 Download complete 9691625c9fe6 Extracting [============> ] 21.17MB/84.46MB 286d4cd18b47 
Extracting [===========================> ] 99.16MB/180.3MB 181573c3afd9 Extracting [===============================> ] 20.91MB/32.98MB 1b30b2d9318a Downloading [> ] 539.6kB/55.45MB 3595edb9cc0c Downloading [===================> ] 38.39MB/98.32MB 9691625c9fe6 Extracting [===============> ] 26.18MB/84.46MB 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 286d4cd18b47 Extracting [===========================> ] 100.8MB/180.3MB 1b30b2d9318a Downloading [===> ] 4.324MB/55.45MB 9fa9226be034 Extracting [==================================================>] 783kB/783kB 9fa9226be034 Extracting [==================================================>] 783kB/783kB 3595edb9cc0c Downloading [======================> ] 44.87MB/98.32MB 181573c3afd9 Extracting [================================> ] 21.63MB/32.98MB 9691625c9fe6 Extracting [==================> ] 31.75MB/84.46MB 286d4cd18b47 Extracting [============================> ] 103.1MB/180.3MB 3595edb9cc0c Downloading [===========================> ] 54.07MB/98.32MB 1b30b2d9318a Downloading [=======> ] 8.65MB/55.45MB 9fa9226be034 Pull complete 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 181573c3afd9 Extracting [=================================> ] 22.35MB/32.98MB 9691625c9fe6 Extracting [======================> ] 38.44MB/84.46MB 286d4cd18b47 Extracting [=============================> ] 105.3MB/180.3MB 3595edb9cc0c Downloading [==============================> ] 60.55MB/98.32MB 1b30b2d9318a Downloading [=============> ] 14.6MB/55.45MB 9691625c9fe6 Extracting [========================> ] 41.22MB/84.46MB 181573c3afd9 Extracting [===================================> ] 23.43MB/32.98MB 3595edb9cc0c Downloading [==================================> ] 67.04MB/98.32MB 1b30b2d9318a Downloading [==================> ] 20MB/55.45MB 286d4cd18b47 Extracting [=============================> ] 107MB/180.3MB 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 9691625c9fe6 Extracting 
[==========================> ] 45.12MB/84.46MB 3595edb9cc0c Downloading [======================================> ] 75.15MB/98.32MB 1b30b2d9318a Downloading [=======================> ] 25.95MB/55.45MB 286d4cd18b47 Extracting [==============================> ] 108.6MB/180.3MB 1617e25568b2 Extracting [===============================================> ] 458.8kB/480.9kB 181573c3afd9 Extracting [====================================> ] 23.79MB/32.98MB 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 9691625c9fe6 Extracting [=============================> ] 50.14MB/84.46MB 3595edb9cc0c Downloading [==========================================> ] 82.72MB/98.32MB 1b30b2d9318a Downloading [=============================> ] 32.98MB/55.45MB 286d4cd18b47 Extracting [==============================> ] 110.9MB/180.3MB 181573c3afd9 Extracting [=====================================> ] 24.51MB/32.98MB 9691625c9fe6 Extracting [================================> ] 55.15MB/84.46MB 3595edb9cc0c Downloading [==============================================> ] 91.37MB/98.32MB 1b30b2d9318a Downloading [=======================================> ] 44.33MB/55.45MB 286d4cd18b47 Extracting [===============================> ] 112MB/180.3MB 1617e25568b2 Pull complete 181573c3afd9 Extracting [======================================> ] 25.59MB/32.98MB 3595edb9cc0c Verifying Checksum 3595edb9cc0c Download complete 9691625c9fe6 Extracting [===================================> ] 59.6MB/84.46MB 1b30b2d9318a Downloading [=============================================> ] 50.82MB/55.45MB f6d077cd6629 Downloading [> ] 506.8kB/50.34MB 286d4cd18b47 Extracting [===============================> ] 113.6MB/180.3MB 1b30b2d9318a Download complete d6c6c26dc98a Downloading [==================================================>] 605B/605B d6c6c26dc98a Verifying Checksum d6c6c26dc98a Download 
complete 181573c3afd9 Extracting [========================================> ] 26.67MB/32.98MB 60290e82ca2c Downloading [==================================================>] 2.679kB/2.679kB 60290e82ca2c Download complete 9691625c9fe6 Extracting [=====================================> ] 63.5MB/84.46MB f6d077cd6629 Downloading [====> ] 4.062MB/50.34MB 78605ea207be Downloading [================================================> ] 3.011kB/3.089kB 78605ea207be Downloading [==================================================>] 3.089kB/3.089kB 78605ea207be Verifying Checksum 78605ea207be Download complete 869e11012e0e Downloading [=====================================> ] 3.011kB/4.023kB 869e11012e0e Downloading [==================================================>] 4.023kB/4.023kB 869e11012e0e Verifying Checksum 869e11012e0e Download complete c4426427fcc3 Download complete 286d4cd18b47 Extracting [===============================> ] 115.3MB/180.3MB 3595edb9cc0c Extracting [> ] 557.1kB/98.32MB d247d9811eae Downloading [=> ] 3.009kB/139.8kB d247d9811eae Downloading [==================================================>] 139.8kB/139.8kB d247d9811eae Verifying Checksum d247d9811eae Download complete 9691625c9fe6 Extracting [========================================> ] 67.96MB/84.46MB f1fb904ca1b9 Downloading [==================================================>] 100B/100B f1fb904ca1b9 Verifying Checksum f1fb904ca1b9 Download complete f6d077cd6629 Downloading [===========> ] 11.68MB/50.34MB 1e12dd793eba Downloading [==================================================>] 721B/721B 1e12dd793eba Verifying Checksum 1e12dd793eba Download complete 10ac4908093d Downloading [> ] 310.2kB/30.43MB 1b30b2d9318a Extracting [> ] 557.1kB/55.45MB 286d4cd18b47 Extracting [================================> ] 117MB/180.3MB 181573c3afd9 Extracting [=========================================> ] 27.39MB/32.98MB 3595edb9cc0c Extracting [=> ] 3.899MB/98.32MB 9691625c9fe6 Extracting 
[==========================================> ] 71.3MB/84.46MB f6d077cd6629 Downloading [=================> ] 17.27MB/50.34MB 10ac4908093d Downloading [======> ] 4.046MB/30.43MB 286d4cd18b47 Extracting [================================> ] 118.7MB/180.3MB 1b30b2d9318a Extracting [==> ] 2.785MB/55.45MB 3595edb9cc0c Extracting [====> ] 8.913MB/98.32MB 181573c3afd9 Extracting [============================================> ] 29.2MB/32.98MB 9691625c9fe6 Extracting [============================================> ] 75.2MB/84.46MB f6d077cd6629 Downloading [=======================> ] 23.87MB/50.34MB 10ac4908093d Downloading [=================> ] 10.89MB/30.43MB 3595edb9cc0c Extracting [======> ] 12.81MB/98.32MB 1b30b2d9318a Extracting [===> ] 3.899MB/55.45MB 9691625c9fe6 Extracting [===============================================> ] 79.66MB/84.46MB 286d4cd18b47 Extracting [=================================> ] 120.3MB/180.3MB f6d077cd6629 Downloading [============================> ] 28.44MB/50.34MB 10ac4908093d Downloading [==========================> ] 15.88MB/30.43MB 181573c3afd9 Extracting [=============================================> ] 30.28MB/32.98MB 3595edb9cc0c Extracting [========> ] 17.27MB/98.32MB 9691625c9fe6 Extracting [=================================================> ] 83.56MB/84.46MB f6d077cd6629 Downloading [==================================> ] 35.04MB/50.34MB 9691625c9fe6 Extracting [==================================================>] 84.46MB/84.46MB 286d4cd18b47 Extracting [=================================> ] 122MB/180.3MB 10ac4908093d Downloading [===================================> ] 21.48MB/30.43MB 1b30b2d9318a Extracting [====> ] 5.014MB/55.45MB 181573c3afd9 Extracting [==============================================> ] 31MB/32.98MB 3595edb9cc0c Extracting [===========> ] 22.84MB/98.32MB f6d077cd6629 Downloading [=============================================> ] 45.71MB/50.34MB 10ac4908093d Verifying Checksum 10ac4908093d Download complete 
f6d077cd6629 Verifying Checksum f6d077cd6629 Download complete 44779101e748 Downloading [==================================================>] 1.744kB/1.744kB 44779101e748 Verifying Checksum 44779101e748 Download complete 286d4cd18b47 Extracting [==================================> ] 125.3MB/180.3MB 1850a929b84a Downloading [==================================================>] 149B/149B 1b30b2d9318a Extracting [=======> ] 7.799MB/55.45MB 1850a929b84a Verifying Checksum a721db3e3f3d Downloading [> ] 64.45kB/5.526MB 1850a929b84a Download complete 397a918c7da3 Downloading [==================================================>] 327B/327B 397a918c7da3 Verifying Checksum 397a918c7da3 Download complete 181573c3afd9 Extracting [=================================================> ] 32.8MB/32.98MB 181573c3afd9 Extracting [==================================================>] 32.98MB/32.98MB 806be17e856d Downloading [> ] 539.6kB/89.72MB 3595edb9cc0c Extracting [===============> ] 30.64MB/98.32MB a721db3e3f3d Verifying Checksum a721db3e3f3d Download complete 634de6c90876 Downloading [===========================================> ] 3.011kB/3.49kB 634de6c90876 Download complete 9691625c9fe6 Pull complete a4d16507c3db Extracting [==================================================>] 1.113kB/1.113kB a4d16507c3db Extracting [==================================================>] 1.113kB/1.113kB 181573c3afd9 Pull complete 1b30b2d9318a Extracting [=========> ] 10.03MB/55.45MB 5828e4fe52be Extracting [==================================================>] 1.075kB/1.075kB cd00854cfb1a Downloading [=====================> ] 3.011kB/6.971kB cd00854cfb1a Downloading [==================================================>] 6.971kB/6.971kB cd00854cfb1a Verifying Checksum cd00854cfb1a Download complete 5828e4fe52be Extracting [==================================================>] 1.075kB/1.075kB 806be17e856d Downloading [===> ] 5.946MB/89.72MB 286d4cd18b47 Extracting [===================================> ] 
127.6MB/180.3MB 3595edb9cc0c Extracting [=================> ] 35.09MB/98.32MB 4abcf2066143 Downloading [> ] 48.11kB/3.409MB 10ac4908093d Extracting [> ] 327.7kB/30.43MB 4abcf2066143 Verifying Checksum 4abcf2066143 Download complete 4abcf2066143 Extracting [> ] 65.54kB/3.409MB 2d9c3489ff61 Downloading [==================================================>] 140B/140B 2d9c3489ff61 Verifying Checksum 2d9c3489ff61 Download complete 806be17e856d Downloading [=======> ] 14.06MB/89.72MB 3595edb9cc0c Extracting [====================> ] 39.55MB/98.32MB c6f70df7a645 Downloading [> ] 31.68kB/3.162MB 1b30b2d9318a Extracting [==========> ] 11.7MB/55.45MB 286d4cd18b47 Extracting [====================================> ] 129.8MB/180.3MB 10ac4908093d Extracting [==> ] 1.638MB/30.43MB 806be17e856d Downloading [==========> ] 19.46MB/89.72MB c6f70df7a645 Downloading [=================================================> ] 3.145MB/3.162MB c6f70df7a645 Verifying Checksum c6f70df7a645 Download complete 3595edb9cc0c Extracting [======================> ] 43.45MB/98.32MB 1b30b2d9318a Extracting [============> ] 13.37MB/55.45MB 5828e4fe52be Pull complete fba3b8441608 Extracting [==================================================>] 5.323kB/5.323kB fba3b8441608 Extracting [==================================================>] 5.323kB/5.323kB a4d16507c3db Pull complete 7bb182c52b4c Downloading [> ] 48.06kB/4.333MB 286d4cd18b47 Extracting [====================================> ] 131.5MB/180.3MB 10ac4908093d Extracting [=====> ] 3.604MB/30.43MB api Pulled 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 806be17e856d Downloading [==============> ] 26.49MB/89.72MB 3595edb9cc0c Extracting [=======================> ] 46.79MB/98.32MB 7bb182c52b4c Downloading [==================================> ] 2.997MB/4.333MB 10ac4908093d Extracting [=========> ] 5.571MB/30.43MB 7bb182c52b4c Verifying Checksum 7bb182c52b4c Download complete 286d4cd18b47 Extracting [=====================================> ] 133.7MB/180.3MB 
1b30b2d9318a Extracting [=============> ] 15.04MB/55.45MB 17afd0a389e3 Downloading [===> ] 3.01kB/47.11kB 17afd0a389e3 Downloading [==================================================>] 47.11kB/47.11kB 17afd0a389e3 Verifying Checksum 17afd0a389e3 Download complete 4abcf2066143 Extracting [================================> ] 2.228MB/3.409MB fba3b8441608 Pull complete 93a6eb5bf657 Downloading [======> ] 3.01kB/23.29kB 93a6eb5bf657 Downloading [==================================================>] 23.29kB/23.29kB 93a6eb5bf657 Verifying Checksum 93a6eb5bf657 Download complete 806be17e856d Downloading [==================> ] 34.06MB/89.72MB 3595edb9cc0c Extracting [==========================> ] 51.81MB/98.32MB d9839d085a06 Extracting [==================================================>] 5.308kB/5.308kB d9839d085a06 Extracting [==================================================>] 5.308kB/5.308kB 280250a24ab7 Downloading [> ] 539.6kB/57.36MB 10ac4908093d Extracting [===========> ] 7.209MB/30.43MB 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 1b30b2d9318a Extracting [===============> ] 16.71MB/55.45MB 286d4cd18b47 Extracting [=====================================> ] 135.9MB/180.3MB 806be17e856d Downloading [======================> ] 40.55MB/89.72MB 3595edb9cc0c Extracting [============================> ] 55.15MB/98.32MB 280250a24ab7 Downloading [====> ] 4.865MB/57.36MB 4abcf2066143 Pull complete 10ac4908093d Extracting [=============> ] 8.192MB/30.43MB 2d9c3489ff61 Extracting [==================================================>] 140B/140B 2d9c3489ff61 Extracting [==================================================>] 140B/140B 1b30b2d9318a Extracting [=================> ] 19.5MB/55.45MB 286d4cd18b47 Extracting [======================================> ] 137.6MB/180.3MB 806be17e856d Downloading [==========================> ] 48.12MB/89.72MB 3595edb9cc0c Extracting [==============================> ] 59.05MB/98.32MB 280250a24ab7 
Downloading [========> ] 9.731MB/57.36MB d9839d085a06 Pull complete f73a463460fa Extracting [==================================================>] 1.034kB/1.034kB f73a463460fa Extracting [==================================================>] 1.034kB/1.034kB 10ac4908093d Extracting [==============> ] 8.847MB/30.43MB 806be17e856d Downloading [===============================> ] 56.77MB/89.72MB 286d4cd18b47 Extracting [======================================> ] 138.7MB/180.3MB 1b30b2d9318a Extracting [==================> ] 20.61MB/55.45MB 3595edb9cc0c Extracting [===============================> ] 62.39MB/98.32MB 280250a24ab7 Downloading [============> ] 14.6MB/57.36MB 2d9c3489ff61 Pull complete c6f70df7a645 Extracting [> ] 32.77kB/3.162MB 806be17e856d Downloading [====================================> ] 65.42MB/89.72MB 10ac4908093d Extracting [=================> ] 10.49MB/30.43MB 3595edb9cc0c Extracting [==================================> ] 67.4MB/98.32MB 280250a24ab7 Downloading [==================> ] 21.63MB/57.36MB 286d4cd18b47 Extracting [======================================> ] 140.4MB/180.3MB 806be17e856d Downloading [========================================> ] 71.91MB/89.72MB 1b30b2d9318a Extracting [===================> ] 21.73MB/55.45MB 10ac4908093d Extracting [====================> ] 12.45MB/30.43MB 280250a24ab7 Downloading [========================> ] 28.65MB/57.36MB 3595edb9cc0c Extracting [===================================> ] 70.75MB/98.32MB 806be17e856d Downloading [===========================================> ] 77.86MB/89.72MB 286d4cd18b47 Extracting [=======================================> ] 142MB/180.3MB 10ac4908093d Extracting [=======================> ] 14.42MB/30.43MB 1b30b2d9318a Extracting [======================> ] 24.51MB/55.45MB f73a463460fa Pull complete 280250a24ab7 Downloading [==============================> ] 35.14MB/57.36MB 0bfd3aaf5d6c Extracting [==================================================>] 1.035kB/1.035kB 0bfd3aaf5d6c 
Extracting [==================================================>] 1.035kB/1.035kB 3595edb9cc0c Extracting [=====================================> ] 74.65MB/98.32MB c6f70df7a645 Extracting [=====> ] 327.7kB/3.162MB 806be17e856d Downloading [==============================================> ] 83.26MB/89.72MB 286d4cd18b47 Extracting [=======================================> ] 143.7MB/180.3MB 1b30b2d9318a Extracting [=======================> ] 26.18MB/55.45MB 10ac4908093d Extracting [==========================> ] 16.06MB/30.43MB 280250a24ab7 Downloading [=====================================> ] 42.71MB/57.36MB 3595edb9cc0c Extracting [======================================> ] 76.32MB/98.32MB c6f70df7a645 Extracting [=================> ] 1.114MB/3.162MB 806be17e856d Downloading [=================================================> ] 89.21MB/89.72MB 806be17e856d Verifying Checksum 806be17e856d Download complete 1b30b2d9318a Extracting [========================> ] 27.3MB/55.45MB 10ac4908093d Extracting [============================> ] 17.37MB/30.43MB 0f4bc59b85b3 Downloading [> ] 506.8kB/50.17MB 280250a24ab7 Downloading [========================================> ] 46.5MB/57.36MB 3595edb9cc0c Extracting [=========================================> ] 80.77MB/98.32MB 286d4cd18b47 Extracting [========================================> ] 145.9MB/180.3MB c6f70df7a645 Extracting [===========================================> ] 2.72MB/3.162MB 1b30b2d9318a Extracting [==========================> ] 28.97MB/55.45MB 280250a24ab7 Downloading [=================================================> ] 56.23MB/57.36MB 0f4bc59b85b3 Downloading [====> ] 4.062MB/50.17MB 10ac4908093d Extracting [===============================> ] 19.33MB/30.43MB 286d4cd18b47 Extracting [=========================================> ] 148.7MB/180.3MB 3595edb9cc0c Extracting [===========================================> ] 84.67MB/98.32MB 280250a24ab7 Verifying Checksum 280250a24ab7 Download complete c919ef978278 Downloading 
[============> ] 3.01kB/11.92kB c919ef978278 Downloading [==================================================>] 11.92kB/11.92kB c919ef978278 Verifying Checksum c919ef978278 Download complete c6f70df7a645 Extracting [================================================> ] 3.08MB/3.162MB 0bfd3aaf5d6c Pull complete 56de912e3e14 Downloading [==================================================>] 1.225kB/1.225kB 56de912e3e14 Verifying Checksum 56de912e3e14 Download complete 1b30b2d9318a Extracting [============================> ] 31.75MB/55.45MB 0f4bc59b85b3 Downloading [=======> ] 7.11MB/50.17MB 5fe13c15ba37 Extracting [==================================================>] 13.9kB/13.9kB 5fe13c15ba37 Extracting [==================================================>] 13.9kB/13.9kB 10ac4908093d Extracting [===================================> ] 21.63MB/30.43MB 3595edb9cc0c Extracting [==============================================> ] 90.8MB/98.32MB 286d4cd18b47 Extracting [=========================================> ] 151MB/180.3MB c6f70df7a645 Extracting [==================================================>] 3.162MB/3.162MB 1b30b2d9318a Extracting [===============================> ] 34.54MB/55.45MB 0f4bc59b85b3 Downloading [============> ] 12.19MB/50.17MB 10ac4908093d Extracting [======================================> ] 23.27MB/30.43MB 22ebf0e44c85 Downloading [> ] 380.1kB/37.02MB 22ebf0e44c85 Downloading [> ] 380.1kB/37.02MB 3595edb9cc0c Extracting [================================================> ] 94.7MB/98.32MB 1b30b2d9318a Extracting [==================================> ] 37.88MB/55.45MB 286d4cd18b47 Extracting [==========================================> ] 152.6MB/180.3MB 0f4bc59b85b3 Downloading [==================> ] 18.79MB/50.17MB 10ac4908093d Extracting [========================================> ] 24.58MB/30.43MB 3595edb9cc0c Extracting [==================================================>] 98.32MB/98.32MB 22ebf0e44c85 Downloading [==========> ] 7.559MB/37.02MB 
22ebf0e44c85 Downloading [==========> ] 7.559MB/37.02MB c6f70df7a645 Pull complete 7bb182c52b4c Extracting [> ] 65.54kB/4.333MB 3595edb9cc0c Pull complete 1b30b2d9318a Extracting [=========================================> ] 45.68MB/55.45MB ffe123cfbf03 Extracting [==================================================>] 1.297kB/1.297kB ffe123cfbf03 Extracting [==================================================>] 1.297kB/1.297kB 5fe13c15ba37 Pull complete faaa56d14bf4 Extracting [==================================================>] 13.78kB/13.78kB faaa56d14bf4 Extracting [==================================================>] 13.78kB/13.78kB 0f4bc59b85b3 Downloading [============================> ] 28.95MB/50.17MB 286d4cd18b47 Extracting [===========================================> ] 156MB/180.3MB 22ebf0e44c85 Downloading [=========================> ] 18.53MB/37.02MB 22ebf0e44c85 Downloading [=========================> ] 18.53MB/37.02MB 10ac4908093d Extracting [===========================================> ] 26.54MB/30.43MB 7bb182c52b4c Extracting [===> ] 262.1kB/4.333MB 1b30b2d9318a Extracting [==============================================> ] 51.25MB/55.45MB 0f4bc59b85b3 Downloading [=======================================> ] 40.12MB/50.17MB 286d4cd18b47 Extracting [===========================================> ] 158.2MB/180.3MB 22ebf0e44c85 Downloading [=======================================> ] 29.49MB/37.02MB 22ebf0e44c85 Downloading [=======================================> ] 29.49MB/37.02MB 10ac4908093d Extracting [=============================================> ] 27.85MB/30.43MB 0f4bc59b85b3 Verifying Checksum 0f4bc59b85b3 Download complete 7bb182c52b4c Extracting [===========================> ] 2.425MB/4.333MB 1b30b2d9318a Extracting [=================================================> ] 55.15MB/55.45MB 22ebf0e44c85 Verifying Checksum 22ebf0e44c85 Download complete 22ebf0e44c85 Download complete 286d4cd18b47 Extracting [============================================> 
] 161MB/180.3MB ffe123cfbf03 Pull complete faaa56d14bf4 Pull complete 7bb182c52b4c Extracting [================================================> ] 4.194MB/4.333MB 7bb182c52b4c Extracting [==================================================>] 4.333MB/4.333MB 10ac4908093d Extracting [===============================================> ] 29.16MB/30.43MB 1b30b2d9318a Extracting [==================================================>] 55.45MB/55.45MB 286d4cd18b47 Extracting [============================================> ] 162.1MB/180.3MB 6b11e56702ad Downloading [> ] 77.31kB/7.707MB 6b11e56702ad Downloading [> ] 77.31kB/7.707MB 00b33c871d26 Downloading [> ] 539.9kB/253.3MB 00b33c871d26 Downloading [> ] 539.9kB/253.3MB f56fa6e6c695 Extracting [==================================================>] 2.239kB/2.239kB f56fa6e6c695 Extracting [==================================================>] 2.239kB/2.239kB 22ebf0e44c85 Extracting [> ] 393.2kB/37.02MB 22ebf0e44c85 Extracting [> ] 393.2kB/37.02MB 286d4cd18b47 Extracting [=============================================> ] 163.2MB/180.3MB 6b11e56702ad Downloading [====================> ] 3.111MB/7.707MB 6b11e56702ad Downloading [====================> ] 3.111MB/7.707MB 00b33c871d26 Downloading [==> ] 10.72MB/253.3MB 00b33c871d26 Downloading [==> ] 10.72MB/253.3MB pap Pulled 7bb182c52b4c Pull complete 10ac4908093d Extracting [=================================================> ] 30.15MB/30.43MB 22ebf0e44c85 Extracting [====> ] 3.539MB/37.02MB 22ebf0e44c85 Extracting [====> ] 3.539MB/37.02MB 6b11e56702ad Downloading [==============================> ] 4.75MB/7.707MB 6b11e56702ad Downloading [==============================> ] 4.75MB/7.707MB 00b33c871d26 Downloading [====> ] 21.43MB/253.3MB 00b33c871d26 Downloading [====> ] 21.43MB/253.3MB 1b30b2d9318a Pull complete 17afd0a389e3 Extracting [==================================> ] 32.77kB/47.11kB 17afd0a389e3 Extracting [==================================================>] 47.11kB/47.11kB 
[docker layer download/extract progress output elided]
policy-db-migrator Pulled
simulator Pulled
prometheus Pulled
apex-pdp Pulled 00b33c871d26 Extracting [======================================> ] 192.7MB/253.3MB 00b33c871d26 Extracting [======================================> ] 192.7MB/253.3MB c919ef978278 Pull complete 634de6c90876 Pull complete cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB 56de912e3e14 Extracting [==================================================>] 1.225kB/1.225kB 56de912e3e14 Extracting [==================================================>] 1.225kB/1.225kB 00b33c871d26 Extracting [======================================> ] 194.4MB/253.3MB 00b33c871d26 Extracting [======================================> ] 194.4MB/253.3MB 00b33c871d26 Extracting [======================================> ] 196.1MB/253.3MB 00b33c871d26 Extracting [======================================> ] 196.1MB/253.3MB 56de912e3e14 Pull complete cd00854cfb1a Pull complete grafana Pulled mariadb Pulled 00b33c871d26 Extracting [=======================================> ] 198.3MB/253.3MB 00b33c871d26 Extracting [=======================================> ] 198.3MB/253.3MB 00b33c871d26 Extracting [=======================================> ] 200MB/253.3MB 00b33c871d26 Extracting [=======================================> ] 200MB/253.3MB 00b33c871d26 Extracting [========================================> ] 202.8MB/253.3MB 00b33c871d26 Extracting [========================================> ] 202.8MB/253.3MB 00b33c871d26 Extracting [========================================> ] 205MB/253.3MB 00b33c871d26 Extracting [========================================> ] 205MB/253.3MB 00b33c871d26 Extracting [========================================> ] 207.2MB/253.3MB 00b33c871d26 Extracting [========================================> ] 207.2MB/253.3MB 00b33c871d26 Extracting [=========================================> ] 211.7MB/253.3MB 00b33c871d26 Extracting 
[=========================================> ] 211.7MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 214.5MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 214.5MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 217.8MB/253.3MB 00b33c871d26 Extracting [==========================================> ] 217.8MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 219.5MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 219.5MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 221.2MB/253.3MB 00b33c871d26 Extracting [===========================================> ] 221.2MB/253.3MB 00b33c871d26 Extracting [============================================> ] 223.9MB/253.3MB 00b33c871d26 Extracting [============================================> ] 223.9MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 229MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 229MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 232.3MB/253.3MB 00b33c871d26 Extracting [=============================================> ] 232.3MB/253.3MB 00b33c871d26 Extracting [==============================================> ] 236.2MB/253.3MB 00b33c871d26 Extracting [==============================================> ] 236.2MB/253.3MB 00b33c871d26 Extracting [===============================================> ] 241.2MB/253.3MB 00b33c871d26 Extracting [===============================================> ] 241.2MB/253.3MB 00b33c871d26 Extracting [===============================================> ] 241.8MB/253.3MB 00b33c871d26 Extracting [===============================================> ] 241.8MB/253.3MB 00b33c871d26 Extracting [=================================================> ] 249MB/253.3MB 00b33c871d26 Extracting 
[=================================================> ] 249MB/253.3MB 00b33c871d26 Extracting [=================================================> ] 252.3MB/253.3MB 00b33c871d26 Extracting [=================================================> ] 252.3MB/253.3MB 00b33c871d26 Extracting [==================================================>] 253.3MB/253.3MB 00b33c871d26 Extracting [==================================================>] 253.3MB/253.3MB 00b33c871d26 Pull complete 00b33c871d26 Pull complete 6b11e56702ad Extracting [> ] 98.3kB/7.707MB 6b11e56702ad Extracting [> ] 98.3kB/7.707MB 6b11e56702ad Extracting [===========================> ] 4.227MB/7.707MB 6b11e56702ad Extracting [===========================> ] 4.227MB/7.707MB 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB 6b11e56702ad Extracting [==================================================>] 7.707MB/7.707MB 6b11e56702ad Pull complete 6b11e56702ad Pull complete 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Extracting [==================================================>] 19.96kB/19.96kB 53d69aa7d3fc Pull complete 53d69aa7d3fc Pull complete a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [> ] 426kB/39.52MB a3ab11953ef9 Extracting [================> ] 13.21MB/39.52MB a3ab11953ef9 Extracting [================> ] 13.21MB/39.52MB a3ab11953ef9 Extracting [==============================> ] 23.86MB/39.52MB a3ab11953ef9 Extracting [==============================> ] 23.86MB/39.52MB a3ab11953ef9 Extracting [==============================================> ] 37.06MB/39.52MB a3ab11953ef9 Extracting [==============================================> ] 37.06MB/39.52MB a3ab11953ef9 Extracting 
[==================================================>] 39.52MB/39.52MB a3ab11953ef9 Extracting [==================================================>] 39.52MB/39.52MB a3ab11953ef9 Pull complete a3ab11953ef9 Pull complete 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Extracting [==================================================>] 1.101kB/1.101kB 91ef9543149d Pull complete 91ef9543149d Pull complete 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Extracting [==================================================>] 881B/881B 2ec4f59af178 Pull complete 2ec4f59af178 Pull complete 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Extracting [==================================================>] 131B/131B 8b7e81cd5ef1 Pull complete 8b7e81cd5ef1 Pull complete c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Extracting [==================================================>] 171B/171B c52916c1316e Pull complete c52916c1316e Pull complete d93f69e96600 Extracting [> ] 557.1kB/115.2MB 7a1cb9ad7f75 Extracting [> ] 557.1kB/115.2MB d93f69e96600 Extracting [====> ] 
9.47MB/115.2MB 7a1cb9ad7f75 Extracting [=====> ] 13.37MB/115.2MB d93f69e96600 Extracting [=========> ] 22.28MB/115.2MB 7a1cb9ad7f75 Extracting [============> ] 28.41MB/115.2MB d93f69e96600 Extracting [==============> ] 32.87MB/115.2MB 7a1cb9ad7f75 Extracting [================> ] 38.99MB/115.2MB d93f69e96600 Extracting [====================> ] 46.24MB/115.2MB 7a1cb9ad7f75 Extracting [=======================> ] 54.03MB/115.2MB d93f69e96600 Extracting [=========================> ] 59.6MB/115.2MB 7a1cb9ad7f75 Extracting [=============================> ] 67.96MB/115.2MB d93f69e96600 Extracting [================================> ] 74.65MB/115.2MB 7a1cb9ad7f75 Extracting [===================================> ] 82.44MB/115.2MB d93f69e96600 Extracting [======================================> ] 88.57MB/115.2MB 7a1cb9ad7f75 Extracting [=========================================> ] 96.37MB/115.2MB d93f69e96600 Extracting [===========================================> ] 99.71MB/115.2MB 7a1cb9ad7f75 Extracting [================================================> ] 110.9MB/115.2MB d93f69e96600 Extracting [================================================> ] 110.9MB/115.2MB 7a1cb9ad7f75 Extracting [==================================================>] 115.2MB/115.2MB 7a1cb9ad7f75 Pull complete 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB 0a92c7dea7af Extracting [==================================================>] 3.449kB/3.449kB d93f69e96600 Extracting [==================================================>] 115.2MB/115.2MB d93f69e96600 Pull complete bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB 0a92c7dea7af Pull complete bbb9d15c45a1 Extracting [==================================================>] 3.633kB/3.633kB zookeeper Pulled bbb9d15c45a1 Pull complete kafka Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container mariadb Creating 
Container prometheus Creating
Container simulator Creating
Container prometheus Created
Container mariadb Created
Container grafana Creating
Container policy-db-migrator Creating
Container zookeeper Created
Container kafka Creating
Container simulator Created
Container grafana Created
Container policy-db-migrator Created
Container policy-api Creating
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-apex-pdp Creating
Container policy-apex-pdp Created
Container simulator Starting
Container zookeeper Starting
Container prometheus Starting
Container mariadb Starting
Container mariadb Started
Container policy-db-migrator Starting
Container simulator Started
Container zookeeper Started
Container kafka Starting
Container prometheus Started
Container grafana Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container kafka Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Container grafana Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting for REST to come up on localhost port 30003...
NAMES             STATUS
policy-apex-pdp   Up 11 seconds
policy-pap        Up 12 seconds
policy-api        Up 13 seconds
kafka             Up 13 seconds
grafana           Up 10 seconds
prometheus        Up 15 seconds
zookeeper         Up 16 seconds
mariadb           Up 18 seconds
simulator         Up 17 seconds
NAMES             STATUS
policy-apex-pdp   Up 36 seconds
policy-pap        Up 37 seconds
policy-api        Up 38 seconds
kafka             Up 38 seconds
grafana           Up 35 seconds
prometheus        Up 40 seconds
zookeeper         Up 41 seconds
mariadb           Up 43 seconds
simulator         Up 42 seconds
Build docker image for robot framework
Error: No such image: policy-csit-robot
Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
Build robot framework docker image
Sending build context to Docker daemon 16.15MB
Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
3.10-slim-bullseye: Pulling from library/python
728328ac3bde: Pull complete
1b1ca9b4dc3e: Pull complete
87fd8cb1268a: Pull complete
bc8f89fb7e32: Pull complete
91dc9fb1162f: Pull complete
Digest: sha256:9745f361fffc367922210f2de48a58f44782ebf5a7375195e91ebd5b3ce5a8ff
Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
 ---> 585a36762ff2
Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
 ---> Running in ad8b63124f68
Removing intermediate container ad8b63124f68
 ---> c73f873b3e93
Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
 ---> Running in fd5437a285c1
Removing intermediate container fd5437a285c1
 ---> fed84b27766e
Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
 ---> Running in 6b4c37586c95
Removing intermediate container 6b4c37586c95
 ---> c5ecd836001f
Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
 ---> Running in f12e7bd67220
bcrypt==4.1.3 certifi==2024.2.2 cffi==1.16.0 charset-normalizer==3.3.2 confluent-kafka==2.4.0 cryptography==42.0.7 decorator==5.1.1 deepdiff==7.0.1 dnspython==2.6.1 future==1.0.0 idna==3.7 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5 more-itertools==5.0.0 ordered-set==4.1.0 paramiko==3.4.0 pbr==6.0.0 ply==3.11 protobuf==5.27.0 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.1 requests==2.32.2 robotframework==7.0 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a10 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.1
Removing intermediate container f12e7bd67220
 ---> d84c8f1ad7b8
Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
 ---> Running in d60422d7dcb3
Removing intermediate container d60422d7dcb3
 ---> 237ba48d59ed
Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
 ---> c1af833181c1
Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
 ---> Running in f73daf819126
Removing intermediate container f73daf819126
 ---> fe8131c72ea0
Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
 ---> Running in 075b7c09b897
Removing intermediate container 075b7c09b897
 ---> 449f83dbab53
Successfully built 449f83dbab53
Successfully tagged policy-csit-robot:latest
top - 17:03:47 up 4 min, 0 users, load average: 3.83, 2.08, 0.85
Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.8 us, 4.6 sy, 0.0 ni, 74.2 id, 6.3 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.9G         22G        1.3M        6.1G         28G
Swap:          1.0G          0B        1.0G
NAMES             STATUS
policy-apex-pdp   Up About a minute
policy-pap        Up About a minute
policy-api        Up About a minute
kafka             Up About a minute
grafana           Up About a minute
prometheus        Up About a minute
zookeeper         Up About a minute
mariadb           Up About a minute
simulator         Up About a minute
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
2d5a41c7b73a   policy-apex-pdp   1.85%   170.5MiB / 31.41GiB   0.53%   28.1kB / 31.8kB   0B / 0B         49
9e84281c1cba   policy-pap        1.32%   514.5MiB / 31.41GiB   1.60%   117kB / 143kB     0B / 149MB      63
763519c6da31   policy-api        0.49%   653.2MiB / 31.41GiB   2.03%   989kB / 673kB     0B / 0B         54
4a8ce89b42ba   kafka             3.43%   390.5MiB / 31.41GiB   1.21%   144kB / 142kB     0B / 545kB      85
f635ff596bec   grafana           0.05%   59.57MiB / 31.41GiB   0.19%   19.1kB / 3.5kB    0B / 25.4MB     16
06c95a19a44c   prometheus        0.00%   20.05MiB / 31.41GiB   0.06%   59kB / 2.26kB     0B / 0B         13
82b12113e6b6   zookeeper         0.11%   99.26MiB / 31.41GiB   0.31%   61.7kB / 54.1kB   225kB / 451kB   60
610843ad7ede   mariadb           0.03%   102.2MiB / 31.41GiB   0.32%   969kB / 1.22MB    11MB / 72.2MB   29
441e3d9ed07b   simulator         0.10%   122.9MiB / 31.41GiB   0.38%   1.61kB / 0B       0B / 0B         77
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
NAMES             STATUS
policy-apex-pdp   Up 3 minutes
policy-pap        Up 3 minutes
policy-api        Up 3 minutes
kafka             Up 3 minutes
grafana           Up 3 minutes
prometheus        Up 3 minutes
zookeeper         Up 3 minutes
mariadb           Up 3 minutes
simulator         Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
======== Logs from grafana ========
grafana | logger=settings t=2024-05-23T17:02:35.365235455Z level=info msg="Starting Grafana" version=10.4.3 commit=0bfd547800e6eb79dc98e55844ba28194b3df002 branch=v10.4.x compiled=2024-05-23T17:02:35Z
grafana | logger=settings t=2024-05-23T17:02:35.366181593Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-05-23T17:02:35.366197133Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-05-23T17:02:35.366202063Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-05-23T17:02:35.366210633Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-05-23T17:02:35.366216343Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-05-23T17:02:35.366312884Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-05-23T17:02:35.366319354Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-05-23T17:02:35.366324034Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-05-23T17:02:35.366331934Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-05-23T17:02:35.366335764Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-05-23T17:02:35.366346224Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-05-23T17:02:35.366351114Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-05-23T17:02:35.366395615Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-05-23T17:02:35.366441795Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-05-23T17:02:35.366446985Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-05-23T17:02:35.366514116Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-05-23T17:02:35.366520326Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-05-23T17:02:35.366524966Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-05-23T17:02:35.366922649Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-05-23T17:02:35.36695173Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-05-23T17:02:35.367841148Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-05-23T17:02:35.368862537Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-05-23T17:02:35.369936326Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.065429ms
grafana | logger=migrator t=2024-05-23T17:02:35.376918137Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-05-23T17:02:35.377967067Z level=info msg="Migration successfully executed" id="create user table" duration=1.04731ms
grafana | logger=migrator t=2024-05-23T17:02:35.382973681Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-05-23T17:02:35.383825168Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=851.277µs
grafana | logger=migrator t=2024-05-23T17:02:35.388760082Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-05-23T17:02:35.389655319Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=895.877µs
grafana | logger=migrator t=2024-05-23T17:02:35.393772016Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-05-23T17:02:35.394616473Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=844.557µs
grafana | logger=migrator t=2024-05-23T17:02:35.398431017Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-05-23T17:02:35.399200584Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=769.887µs
grafana | logger=migrator t=2024-05-23T17:02:35.458545746Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-05-23T17:02:35.462840124Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.294388ms
grafana | logger=migrator t=2024-05-23T17:02:35.468252812Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-05-23T17:02:35.469556903Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.300201ms
grafana | logger=migrator t=2024-05-23T17:02:35.473075984Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-05-23T17:02:35.473925332Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=848.848µs
grafana | logger=migrator t=2024-05-23T17:02:35.478145079Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-05-23T17:02:35.478937976Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=792.947µs
grafana | logger=migrator t=2024-05-23T17:02:35.483790018Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-05-23T17:02:35.484338153Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=543.945µs
grafana | logger=migrator t=2024-05-23T17:02:35.48736494Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-05-23T17:02:35.488017836Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=652.546µs
grafana | logger=migrator t=2024-05-23T17:02:35.494286651Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-05-23T17:02:35.495903025Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.615614ms
grafana | logger=migrator t=2024-05-23T17:02:35.501456794Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-05-23T17:02:35.501482375Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.601µs
grafana | logger=migrator t=2024-05-23T17:02:35.504037537Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-05-23T17:02:35.505271318Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.232951ms
grafana | logger=migrator t=2024-05-23T17:02:35.508675818Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-05-23T17:02:35.509138842Z level=info msg="Migration successfully executed" id="Add missing user data" duration=455.584µs
grafana | logger=migrator t=2024-05-23T17:02:35.512870285Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2024-05-23T17:02:35.514733261Z level=info msg="Migration successfully executed" id="Add is_disabled column to user"
duration=1.863177ms grafana | logger=migrator t=2024-05-23T17:02:35.520509182Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-05-23T17:02:35.521562501Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.052049ms grafana | logger=migrator t=2024-05-23T17:02:35.525008622Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-05-23T17:02:35.526128641Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.120569ms grafana | logger=migrator t=2024-05-23T17:02:35.531274297Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-05-23T17:02:35.539917543Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.639176ms grafana | logger=migrator t=2024-05-23T17:02:35.565549259Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-05-23T17:02:35.567788368Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.238539ms grafana | logger=migrator t=2024-05-23T17:02:35.57364161Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-05-23T17:02:35.573870312Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=229.512µs grafana | logger=migrator t=2024-05-23T17:02:35.576387044Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-05-23T17:02:35.577262692Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=874.708µs grafana | logger=migrator t=2024-05-23T17:02:35.580659442Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same 
name across orgs" grafana | logger=migrator t=2024-05-23T17:02:35.581059335Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=402.383µs grafana | logger=migrator t=2024-05-23T17:02:35.584481355Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-05-23T17:02:35.585817917Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.337662ms grafana | logger=migrator t=2024-05-23T17:02:35.591106044Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-05-23T17:02:35.591881461Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=774.957µs grafana | logger=migrator t=2024-05-23T17:02:35.596825104Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-05-23T17:02:35.598019755Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.194661ms grafana | logger=migrator t=2024-05-23T17:02:35.601453425Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-05-23T17:02:35.602576735Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.12361ms grafana | logger=migrator t=2024-05-23T17:02:35.607480368Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-05-23T17:02:35.608219295Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=738.847µs grafana | logger=migrator t=2024-05-23T17:02:35.61114671Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-05-23T17:02:35.61117092Z level=info 
msg="Migration successfully executed" id="Update temp_user table charset" duration=24.94µs grafana | logger=migrator t=2024-05-23T17:02:35.613995945Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-05-23T17:02:35.614667461Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=673.406µs grafana | logger=migrator t=2024-05-23T17:02:35.620081169Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-05-23T17:02:35.620782705Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=701.346µs grafana | logger=migrator t=2024-05-23T17:02:35.623794962Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-05-23T17:02:35.624481248Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=683.626µs grafana | logger=migrator t=2024-05-23T17:02:35.627378363Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-05-23T17:02:35.628020439Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=641.356µs grafana | logger=migrator t=2024-05-23T17:02:35.634248944Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-23T17:02:35.637510673Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.261989ms grafana | logger=migrator t=2024-05-23T17:02:35.642316225Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-05-23T17:02:35.643208932Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=892.717µs grafana | logger=migrator t=2024-05-23T17:02:35.645929256Z level=info 
msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-05-23T17:02:35.646655773Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=725.997µs grafana | logger=migrator t=2024-05-23T17:02:35.650618768Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-05-23T17:02:35.651340064Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=721.496µs grafana | logger=migrator t=2024-05-23T17:02:35.68384984Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-05-23T17:02:35.685219472Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.369042ms grafana | logger=migrator t=2024-05-23T17:02:35.688644552Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-05-23T17:02:35.689323058Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=678.356µs grafana | logger=migrator t=2024-05-23T17:02:35.694871547Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-05-23T17:02:35.6952384Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=367.183µs grafana | logger=migrator t=2024-05-23T17:02:35.698512029Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-05-23T17:02:35.698998623Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=486.574µs grafana | logger=migrator t=2024-05-23T17:02:35.701796138Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-05-23T17:02:35.702136311Z level=info msg="Migration successfully 
executed" id="Set created for temp users that will otherwise prematurely expire" duration=341.153µs grafana | logger=migrator t=2024-05-23T17:02:35.706899603Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-05-23T17:02:35.707478378Z level=info msg="Migration successfully executed" id="create star table" duration=578.635µs grafana | logger=migrator t=2024-05-23T17:02:35.712687993Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-05-23T17:02:35.713357619Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=690.246µs grafana | logger=migrator t=2024-05-23T17:02:35.716264695Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-05-23T17:02:35.716968241Z level=info msg="Migration successfully executed" id="create org table v1" duration=703.356µs grafana | logger=migrator t=2024-05-23T17:02:35.742113703Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-05-23T17:02:35.742770948Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=656.675µs grafana | logger=migrator t=2024-05-23T17:02:35.745892596Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-05-23T17:02:35.746516171Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=623.395µs grafana | logger=migrator t=2024-05-23T17:02:35.750633058Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-05-23T17:02:35.751284253Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=651.015µs grafana | logger=migrator t=2024-05-23T17:02:35.754066028Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 
grafana | logger=migrator t=2024-05-23T17:02:35.754893725Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=827.457µs grafana | logger=migrator t=2024-05-23T17:02:35.760895608Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-05-23T17:02:35.761627745Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=729.347µs grafana | logger=migrator t=2024-05-23T17:02:35.766088774Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-05-23T17:02:35.766114524Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.76µs grafana | logger=migrator t=2024-05-23T17:02:35.770425522Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-05-23T17:02:35.770454462Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=32.17µs grafana | logger=migrator t=2024-05-23T17:02:35.7735778Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-05-23T17:02:35.773841532Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=263.572µs grafana | logger=migrator t=2024-05-23T17:02:35.776401264Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-05-23T17:02:35.777186031Z level=info msg="Migration successfully executed" id="create dashboard table" duration=783.597µs grafana | logger=migrator t=2024-05-23T17:02:35.780495631Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-05-23T17:02:35.781310038Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=813.987µs grafana | logger=migrator 
t=2024-05-23T17:02:35.785961339Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-05-23T17:02:35.786811776Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=849.597µs grafana | logger=migrator t=2024-05-23T17:02:35.789854113Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-05-23T17:02:35.790569819Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=712.306µs grafana | logger=migrator t=2024-05-23T17:02:35.793389794Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-05-23T17:02:35.794127621Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=737.217µs grafana | logger=migrator t=2024-05-23T17:02:35.798636101Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-05-23T17:02:35.799537439Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=901.317µs grafana | logger=migrator t=2024-05-23T17:02:35.804752464Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-05-23T17:02:35.809483896Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.730762ms grafana | logger=migrator t=2024-05-23T17:02:35.813430201Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-05-23T17:02:35.814085826Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=652.095µs grafana | logger=migrator t=2024-05-23T17:02:35.818128692Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | 
logger=migrator t=2024-05-23T17:02:35.818853738Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=724.576µs grafana | logger=migrator t=2024-05-23T17:02:35.822225118Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-05-23T17:02:35.822938624Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=712.926µs grafana | logger=migrator t=2024-05-23T17:02:35.828197561Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-05-23T17:02:35.828490323Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=293.182µs grafana | logger=migrator t=2024-05-23T17:02:35.832801441Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-05-23T17:02:35.833989682Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.188551ms grafana | logger=migrator t=2024-05-23T17:02:35.837435062Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-05-23T17:02:35.837532563Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=98.921µs grafana | logger=migrator t=2024-05-23T17:02:35.840188517Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-05-23T17:02:35.842507567Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.31971ms grafana | logger=migrator t=2024-05-23T17:02:35.848429259Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-05-23T17:02:35.850406346Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.976697ms grafana | 
logger=migrator t=2024-05-23T17:02:35.854031418Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.856017746Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.984948ms grafana | logger=migrator t=2024-05-23T17:02:35.85874633Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.85992963Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.1853ms grafana | logger=migrator t=2024-05-23T17:02:35.864041927Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.866034314Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.991447ms grafana | logger=migrator t=2024-05-23T17:02:35.869626276Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.870545614Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=916.768µs grafana | logger=migrator t=2024-05-23T17:02:35.874075695Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-05-23T17:02:35.874929262Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=853.437µs grafana | logger=migrator t=2024-05-23T17:02:35.879094789Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-05-23T17:02:35.879127469Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.76µs grafana | logger=migrator t=2024-05-23T17:02:35.881812633Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator 
t=2024-05-23T17:02:35.881841933Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.81µs grafana | logger=migrator t=2024-05-23T17:02:35.914126567Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.917523407Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.39659ms grafana | logger=migrator t=2024-05-23T17:02:35.921398652Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.923642081Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.244279ms grafana | logger=migrator t=2024-05-23T17:02:35.927704387Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.930001327Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.2958ms grafana | logger=migrator t=2024-05-23T17:02:35.9337077Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.936189712Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.483822ms grafana | logger=migrator t=2024-05-23T17:02:35.939235069Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-05-23T17:02:35.939623392Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=387.613µs grafana | logger=migrator t=2024-05-23T17:02:35.943752758Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-05-23T17:02:35.944674256Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=920.918µs grafana | logger=migrator 
t=2024-05-23T17:02:35.950319026Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-05-23T17:02:35.951107283Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=789.017µs grafana | logger=migrator t=2024-05-23T17:02:35.956295749Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-05-23T17:02:35.956320809Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=26.13µs grafana | logger=migrator t=2024-05-23T17:02:35.960377675Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-05-23T17:02:35.961157252Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=779.057µs grafana | logger=migrator t=2024-05-23T17:02:35.964156998Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-05-23T17:02:35.964893275Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=735.637µs grafana | logger=migrator t=2024-05-23T17:02:35.96778515Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-23T17:02:35.972731214Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.947964ms grafana | logger=migrator t=2024-05-23T17:02:35.977372404Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-05-23T17:02:35.97799049Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=617.806µs grafana | logger=migrator t=2024-05-23T17:02:35.980861805Z level=info msg="Executing migration" id="create index 
IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-05-23T17:02:35.981566271Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=704.066µs grafana | logger=migrator t=2024-05-23T17:02:35.986943329Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-05-23T17:02:35.988059029Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.1126ms grafana | logger=migrator t=2024-05-23T17:02:35.99390734Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-05-23T17:02:35.994213453Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=306.233µs grafana | logger=migrator t=2024-05-23T17:02:35.996392712Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-05-23T17:02:35.996872596Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=479.894µs grafana | logger=migrator t=2024-05-23T17:02:35.999659271Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-05-23T17:02:36.001615978Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.956337ms grafana | logger=migrator t=2024-05-23T17:02:36.004598344Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-05-23T17:02:36.005395961Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=797.627µs grafana | logger=migrator t=2024-05-23T17:02:36.009647769Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-05-23T17:02:36.009872711Z level=info 
msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=224.752µs grafana | logger=migrator t=2024-05-23T17:02:36.014116708Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-05-23T17:02:36.01437229Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=254.972µs grafana | logger=migrator t=2024-05-23T17:02:36.01770556Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-05-23T17:02:36.018501657Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=796.027µs grafana | logger=migrator t=2024-05-23T17:02:36.022739254Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-05-23T17:02:36.024766842Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.027338ms grafana | logger=migrator t=2024-05-23T17:02:36.027494546Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-05-23T17:02:36.028392424Z level=info msg="Migration successfully executed" id="create data_source table" duration=897.718µs grafana | logger=migrator t=2024-05-23T17:02:36.035186733Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-05-23T17:02:36.036310323Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.12296ms grafana | logger=migrator t=2024-05-23T17:02:36.041201087Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-05-23T17:02:36.041917703Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=720.737µs grafana | logger=migrator t=2024-05-23T17:02:36.044920789Z level=info msg="Executing migration" id="drop 
index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.045553915Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=632.946µs
grafana | logger=migrator t=2024-05-23T17:02:36.048260979Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.048915964Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=654.205µs
grafana | logger=migrator t=2024-05-23T17:02:36.053313233Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.059158234Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.844521ms
grafana | logger=migrator t=2024-05-23T17:02:36.062114001Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.062911838Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=797.588µs
grafana | logger=migrator t=2024-05-23T17:02:36.065671982Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-05-23T17:02:36.066528189Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=856.367µs
grafana | logger=migrator t=2024-05-23T17:02:36.070683686Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-05-23T17:02:36.071466733Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=783.057µs
grafana | logger=migrator t=2024-05-23T17:02:36.07907384Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-05-23T17:02:36.079691375Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=613.985µs
grafana | logger=migrator t=2024-05-23T17:02:36.085123183Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-05-23T17:02:36.087310972Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.187149ms
grafana | logger=migrator t=2024-05-23T17:02:36.090620091Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-05-23T17:02:36.093085723Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.467802ms
grafana | logger=migrator t=2024-05-23T17:02:36.10638141Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.10642599Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=46.64µs
grafana | logger=migrator t=2024-05-23T17:02:36.110709568Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-05-23T17:02:36.11096213Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=251.912µs
grafana | logger=migrator t=2024-05-23T17:02:36.115140547Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-05-23T17:02:36.118069923Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.930366ms
grafana | logger=migrator t=2024-05-23T17:02:36.189110528Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-05-23T17:02:36.189618883Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=508.145µs
grafana | logger=migrator t=2024-05-23T17:02:36.195564745Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-05-23T17:02:36.196054159Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=488.854µs
grafana | logger=migrator t=2024-05-23T17:02:36.199860343Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-05-23T17:02:36.202450416Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.587113ms
grafana | logger=migrator t=2024-05-23T17:02:36.20748132Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-05-23T17:02:36.207743842Z level=info msg="Migration successfully executed" id="Update uid value" duration=261.972µs
grafana | logger=migrator t=2024-05-23T17:02:36.211031781Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-05-23T17:02:36.211946039Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=914.048µs
grafana | logger=migrator t=2024-05-23T17:02:36.215050916Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-05-23T17:02:36.216130376Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.07895ms
grafana | logger=migrator t=2024-05-23T17:02:36.220525675Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-05-23T17:02:36.221470693Z level=info msg="Migration successfully executed" id="create api_key table" duration=944.538µs
grafana | logger=migrator t=2024-05-23T17:02:36.225249266Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-05-23T17:02:36.226182784Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=931.218µs
grafana | logger=migrator t=2024-05-23T17:02:36.233773731Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-05-23T17:02:36.235396645Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.622864ms
grafana | logger=migrator t=2024-05-23T17:02:36.241222347Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-05-23T17:02:36.242105194Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=882.557µs
grafana | logger=migrator t=2024-05-23T17:02:36.245547015Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.246354242Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=807.007µs
grafana | logger=migrator t=2024-05-23T17:02:36.251190705Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.252035892Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=845.087µs
grafana | logger=migrator t=2024-05-23T17:02:36.260623937Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.261695187Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.07354ms
grafana | logger=migrator t=2024-05-23T17:02:36.266465489Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.27683035Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.365981ms
grafana | logger=migrator t=2024-05-23T17:02:36.308086915Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.309510578Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.422433ms
grafana | logger=migrator t=2024-05-23T17:02:36.358425688Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-05-23T17:02:36.359933972Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.508613ms
grafana | logger=migrator t=2024-05-23T17:02:36.375282996Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-05-23T17:02:36.376526527Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.243831ms
grafana | logger=migrator t=2024-05-23T17:02:36.429138961Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-05-23T17:02:36.430858626Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.721985ms
grafana | logger=migrator t=2024-05-23T17:02:36.470213612Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-05-23T17:02:36.470861648Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=648.066µs
grafana | logger=migrator t=2024-05-23T17:02:36.477471676Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-05-23T17:02:36.478368404Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=896.518µs
grafana | logger=migrator t=2024-05-23T17:02:36.510236004Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.510278555Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=45.591µs
grafana | logger=migrator t=2024-05-23T17:02:36.517794101Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-05-23T17:02:36.521267011Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.47349ms
grafana | logger=migrator t=2024-05-23T17:02:36.553324822Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-05-23T17:02:36.555427641Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.105699ms
grafana | logger=migrator t=2024-05-23T17:02:36.589142838Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-05-23T17:02:36.589574922Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=432.134µs
grafana | logger=migrator t=2024-05-23T17:02:36.595994209Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-05-23T17:02:36.598868944Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.874275ms
grafana | logger=migrator t=2024-05-23T17:02:36.605789855Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-05-23T17:02:36.608680571Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.889756ms
grafana | logger=migrator t=2024-05-23T17:02:36.613793296Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-05-23T17:02:36.614689314Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=895.938µs
grafana | logger=migrator t=2024-05-23T17:02:36.649404739Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-05-23T17:02:36.650865862Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.460853ms
grafana | logger=migrator t=2024-05-23T17:02:36.656203939Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-05-23T17:02:36.657116067Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=908.528µs
grafana | logger=migrator t=2024-05-23T17:02:36.661835798Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-05-23T17:02:36.662635345Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=799.677µs
grafana | logger=migrator t=2024-05-23T17:02:36.665801103Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-05-23T17:02:36.666886893Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.08504ms
grafana | logger=migrator t=2024-05-23T17:02:36.670707966Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-05-23T17:02:36.671526264Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=818.348µs
grafana | logger=migrator t=2024-05-23T17:02:36.677592557Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-05-23T17:02:36.677692998Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=102.221µs
grafana | logger=migrator t=2024-05-23T17:02:36.681549212Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.681595802Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=48.75µs
grafana | logger=migrator t=2024-05-23T17:02:36.686271563Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2024-05-23T17:02:36.690653722Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.381589ms
grafana | logger=migrator t=2024-05-23T17:02:36.69605228Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2024-05-23T17:02:36.698964015Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.911405ms
grafana | logger=migrator t=2024-05-23T17:02:36.702081743Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2024-05-23T17:02:36.702147133Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=66.21µs
grafana | logger=migrator t=2024-05-23T17:02:36.705272341Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2024-05-23T17:02:36.706004677Z level=info msg="Migration successfully executed" id="create quota table v1" duration=733.706µs
grafana | logger=migrator t=2024-05-23T17:02:36.709276956Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.710088493Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=811.087µs
grafana | logger=migrator t=2024-05-23T17:02:36.713799866Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.713825266Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.32µs
grafana | logger=migrator t=2024-05-23T17:02:36.717346867Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2024-05-23T17:02:36.718126244Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=779.207µs
grafana | logger=migrator t=2024-05-23T17:02:36.721518594Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.722366721Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=847.767µs
grafana | logger=migrator t=2024-05-23T17:02:36.727103543Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2024-05-23T17:02:36.730108189Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.004526ms
grafana | logger=migrator t=2024-05-23T17:02:36.733946873Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.733972303Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.2µs
grafana | logger=migrator t=2024-05-23T17:02:36.737967548Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2024-05-23T17:02:36.738800526Z level=info msg="Migration successfully executed" id="create session table" duration=832.688µs
grafana | logger=migrator t=2024-05-23T17:02:36.743608528Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2024-05-23T17:02:36.743726829Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=119.251µs
grafana | logger=migrator t=2024-05-23T17:02:36.747895836Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2024-05-23T17:02:36.748105348Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=212.652µs
grafana | logger=migrator t=2024-05-23T17:02:36.753329154Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.754180451Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=856.497µs
grafana | logger=migrator t=2024-05-23T17:02:36.75753662Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.758275877Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=739.287µs
grafana | logger=migrator t=2024-05-23T17:02:36.761559286Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.761584626Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.16µs
grafana | logger=migrator t=2024-05-23T17:02:36.766126986Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.766150596Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.11µs
grafana | logger=migrator t=2024-05-23T17:02:36.772081489Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2024-05-23T17:02:36.775247676Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.159267ms
grafana | logger=migrator t=2024-05-23T17:02:36.778440714Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2024-05-23T17:02:36.78138123Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.940046ms
grafana | logger=migrator t=2024-05-23T17:02:36.785050613Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.785187444Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=140.851µs
grafana | logger=migrator t=2024-05-23T17:02:36.788359762Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2024-05-23T17:02:36.788435193Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=76.361µs
grafana | logger=migrator t=2024-05-23T17:02:36.79152563Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2024-05-23T17:02:36.792394537Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=868.257µs
grafana | logger=migrator t=2024-05-23T17:02:36.795814198Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-05-23T17:02:36.795839328Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.19µs
grafana | logger=migrator t=2024-05-23T17:02:36.803078651Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-05-23T17:02:36.806415471Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.3362ms
grafana | logger=migrator t=2024-05-23T17:02:36.814035868Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-05-23T17:02:36.81423453Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=199.772µs
grafana | logger=migrator t=2024-05-23T17:02:36.818369046Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-05-23T17:02:36.825069475Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=6.699659ms
grafana | logger=migrator t=2024-05-23T17:02:36.831650683Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2024-05-23T17:02:36.836845109Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=5.194226ms
grafana | logger=migrator t=2024-05-23T17:02:36.841705962Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2024-05-23T17:02:36.841787352Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=83.01µs
grafana | logger=migrator t=2024-05-23T17:02:36.908281337Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2024-05-23T17:02:36.9097245Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.449633ms
grafana | logger=migrator t=2024-05-23T17:02:36.91656174Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2024-05-23T17:02:36.917799311Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.237411ms
grafana | logger=migrator t=2024-05-23T17:02:36.923449121Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2024-05-23T17:02:36.925357918Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.908187ms
grafana | logger=migrator t=2024-05-23T17:02:36.930936387Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2024-05-23T17:02:36.931933626Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=997.059µs
grafana | logger=migrator t=2024-05-23T17:02:36.936320044Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2024-05-23T17:02:36.937396203Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.076319ms
grafana | logger=migrator t=2024-05-23T17:02:36.942121935Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2024-05-23T17:02:36.943132264Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.006319ms
grafana | logger=migrator t=2024-05-23T17:02:36.948654393Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2024-05-23T17:02:36.950470969Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.823507ms
grafana | logger=migrator t=2024-05-23T17:02:36.957189378Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2024-05-23T17:02:36.95859496Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.409042ms
grafana | logger=migrator t=2024-05-23T17:02:36.96424336Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.965586771Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.347511ms
grafana | logger=migrator t=2024-05-23T17:02:36.970434014Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2024-05-23T17:02:36.981118558Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.682874ms
grafana | logger=migrator t=2024-05-23T17:02:36.984441107Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2024-05-23T17:02:36.985194274Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=753.017µs
grafana | logger=migrator t=2024-05-23T17:02:36.988545984Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2024-05-23T17:02:36.989495932Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=948.528µs
grafana | logger=migrator t=2024-05-23T17:02:36.995520045Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2024-05-23T17:02:36.995821198Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=304.973µs
grafana | logger=migrator t=2024-05-23T17:02:37.001488198Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2024-05-23T17:02:37.002334075Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=841.407µs
grafana | logger=migrator t=2024-05-23T17:02:37.008086655Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2024-05-23T17:02:37.008911823Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=829.698µs
grafana | logger=migrator t=2024-05-23T17:02:37.01421872Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2024-05-23T17:02:37.018084764Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.872105ms
grafana | logger=migrator t=2024-05-23T17:02:37.024844773Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2024-05-23T17:02:37.028280583Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.4353ms
grafana | logger=migrator t=2024-05-23T17:02:37.031596832Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2024-05-23T17:02:37.035349175Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.752093ms
grafana | logger=migrator t=2024-05-23T17:02:37.039000377Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2024-05-23T17:02:37.043243725Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.243488ms
grafana | logger=migrator t=2024-05-23T17:02:37.04952283Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2024-05-23T17:02:37.050407788Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=884.688µs
grafana | logger=migrator t=2024-05-23T17:02:37.053825948Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2024-05-23T17:02:37.053852228Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.68µs
grafana | logger=migrator t=2024-05-23T17:02:37.0563774Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-05-23T17:02:37.056402001Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.411µs
grafana | logger=migrator t=2024-05-23T17:02:37.064116208Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-05-23T17:02:37.065090257Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=973.449µs
grafana | logger=migrator t=2024-05-23T17:02:37.069904089Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-05-23T17:02:37.070728647Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=824.168µs
grafana | logger=migrator t=2024-05-23T17:02:37.074079106Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-05-23T17:02:37.074738232Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=658.936µs
grafana | logger=migrator t=2024-05-23T17:02:37.08247512Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2024-05-23T17:02:37.083274297Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=798.777µs
grafana | logger=migrator t=2024-05-23T17:02:37.089376251Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-05-23T17:02:37.090349249Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=972.518µs
grafana | logger=migrator t=2024-05-23T17:02:37.094626037Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2024-05-23T17:02:37.097455651Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.829444ms
grafana | logger=migrator t=2024-05-23T17:02:37.102615197Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2024-05-23T17:02:37.10527965Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.664023ms
grafana | logger=migrator t=2024-05-23T17:02:37.144634717Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2024-05-23T17:02:37.145316662Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=683.945µs
grafana | logger=migrator t=2024-05-23T17:02:37.149429448Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2024-05-23T17:02:37.151001672Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.570444ms
grafana | logger=migrator t=2024-05-23T17:02:37.156348199Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2024-05-23T17:02:37.157194897Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=847.008µs
grafana | logger=migrator t=2024-05-23T17:02:37.160222714Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2024-05-23T17:02:37.16434024Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.116345ms
grafana | logger=migrator t=2024-05-23T17:02:37.168726048Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2024-05-23T17:02:37.168830399Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=106.151µs
grafana | logger=migrator t=2024-05-23T17:02:37.204509643Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2024-05-23T17:02:37.206242138Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.732855ms
grafana | logger=migrator t=2024-05-23T17:02:37.210306024Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2024-05-23T17:02:37.211633936Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.327912ms
grafana | logger=migrator t=2024-05-23T17:02:37.21551642Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2024-05-23T17:02:37.215647941Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=132.571µs
grafana | logger=migrator t=2024-05-23T17:02:37.220495653Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2024-05-23T17:02:37.222026777Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.531414ms
grafana | logger=migrator t=2024-05-23T17:02:37.226005952Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2024-05-23T17:02:37.227670547Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.668015ms
grafana | logger=migrator t=2024-05-23T17:02:37.235222093Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2024-05-23T17:02:37.23718342Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.961317ms
grafana | logger=migrator t=2024-05-23T17:02:37.245276872Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2024-05-23T17:02:37.24621875Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=941.578µs
grafana | logger=migrator t=2024-05-23T17:02:37.251587757Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2024-05-23T17:02:37.252949679Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.361432ms
grafana | logger=migrator t=2024-05-23T17:02:37.256647771Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2024-05-23T17:02:37.257706821Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.05911ms
grafana | logger=migrator t=2024-05-23T17:02:37.262412382Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2024-05-23T17:02:37.262444533Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=29.711µs
grafana | logger=migrator t=2024-05-23T17:02:37.288169379Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.295924437Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.751709ms
grafana | logger=migrator t=2024-05-23T17:02:37.300702019Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2024-05-23T17:02:37.301315734Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=613.575µs
grafana | logger=migrator t=2024-05-23T17:02:37.318748327Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.328201971Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=9.457594ms
grafana | logger=migrator t=2024-05-23T17:02:37.331933474Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2024-05-23T17:02:37.333036783Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.102419ms
grafana | logger=migrator t=2024-05-23T17:02:37.336736516Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2024-05-23T17:02:37.337539863Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=806.167µs
grafana | logger=migrator t=2024-05-23T17:02:37.34285953Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2024-05-23T17:02:37.343554466Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=695.046µs
grafana | logger=migrator t=2024-05-23T17:02:37.372878623Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-05-23T17:02:37.387786035Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.911072ms
grafana | logger=migrator t=2024-05-23T17:02:37.391095644Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-05-23T17:02:37.391699299Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=604.095µs
grafana | logger=migrator t=2024-05-23T17:02:37.399182985Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2024-05-23T17:02:37.40087092Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.688945ms
grafana | logger=migrator t=2024-05-23T17:02:37.407811531Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2024-05-23T17:02:37.408158724Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=350.123µs
grafana | logger=migrator t=2024-05-23T17:02:37.413838414Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2024-05-23T17:02:37.41449642Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=657.336µs
grafana | logger=migrator t=2024-05-23T17:02:37.417830139Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2024-05-23T17:02:37.418106531Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=276.662µs
grafana | logger=migrator t=2024-05-23T17:02:37.422063056Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.428614274Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.547388ms
grafana | logger=migrator t=2024-05-23T17:02:37.433933681Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.438197468Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.266578ms
grafana | logger=migrator t=2024-05-23T17:02:37.443118991Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.44416619Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.046799ms
grafana | logger=migrator t=2024-05-23T17:02:37.44869058Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2024-05-23T17:02:37.449552758Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=860.238µs
grafana | logger=migrator t=2024-05-23T17:02:37.45660661Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2024-05-23T17:02:37.456870672Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=258.582µs
grafana | logger=migrator t=2024-05-23T17:02:37.460465974Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2024-05-23T17:02:37.46569351Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.225386ms
grafana | logger=migrator t=2024-05-23T17:02:37.470107739Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2024-05-23T17:02:37.471017707Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=908.888µs
grafana | logger=migrator t=2024-05-23T17:02:37.474369896Z level=info
msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-05-23T17:02:37.474546568Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=182.752µs grafana | logger=migrator t=2024-05-23T17:02:37.478903086Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-05-23T17:02:37.479534781Z level=info msg="Migration successfully executed" id="Move region to single row" duration=631.395µs grafana | logger=migrator t=2024-05-23T17:02:37.484692747Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.485533544Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=840.577µs grafana | logger=migrator t=2024-05-23T17:02:37.489237037Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.49070044Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.462243ms grafana | logger=migrator t=2024-05-23T17:02:37.495094238Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.496036426Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=941.998µs grafana | logger=migrator t=2024-05-23T17:02:37.499720989Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.500619067Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=901.988µs grafana | logger=migrator t=2024-05-23T17:02:37.505030086Z 
level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.505860723Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=830.357µs grafana | logger=migrator t=2024-05-23T17:02:37.51011685Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-05-23T17:02:37.510995968Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=883.018µs grafana | logger=migrator t=2024-05-23T17:02:37.514375568Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-05-23T17:02:37.514444938Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=69.9µs grafana | logger=migrator t=2024-05-23T17:02:37.522202157Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-05-23T17:02:37.52366539Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.462093ms grafana | logger=migrator t=2024-05-23T17:02:37.530067476Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-05-23T17:02:37.530983804Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=915.948µs grafana | logger=migrator t=2024-05-23T17:02:37.534261883Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-05-23T17:02:37.535644955Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.385752ms grafana | logger=migrator t=2024-05-23T17:02:37.539273867Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | 
logger=migrator t=2024-05-23T17:02:37.54076359Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.490123ms grafana | logger=migrator t=2024-05-23T17:02:37.545901355Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-05-23T17:02:37.546364709Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=469.734µs grafana | logger=migrator t=2024-05-23T17:02:37.54992173Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2024-05-23T17:02:37.550288304Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=367.014µs grafana | logger=migrator t=2024-05-23T17:02:37.552453263Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-05-23T17:02:37.552515723Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=62.07µs grafana | logger=migrator t=2024-05-23T17:02:37.555459169Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-05-23T17:02:37.556405768Z level=info msg="Migration successfully executed" id="create team table" duration=946.519µs grafana | logger=migrator t=2024-05-23T17:02:37.566686348Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-05-23T17:02:37.568856877Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=2.166669ms grafana | logger=migrator t=2024-05-23T17:02:37.593968898Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-05-23T17:02:37.595559792Z level=info msg="Migration successfully executed" id="add unique index 
team_org_id_name" duration=1.597594ms grafana | logger=migrator t=2024-05-23T17:02:37.602916086Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-05-23T17:02:37.608923679Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.012533ms grafana | logger=migrator t=2024-05-23T17:02:37.612754433Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-05-23T17:02:37.612931545Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=177.132µs grafana | logger=migrator t=2024-05-23T17:02:37.617755267Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-05-23T17:02:37.618695325Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=939.128µs grafana | logger=migrator t=2024-05-23T17:02:37.622105295Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2024-05-23T17:02:37.622946703Z level=info msg="Migration successfully executed" id="create team member table" duration=840.658µs grafana | logger=migrator t=2024-05-23T17:02:37.628134828Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2024-05-23T17:02:37.629039496Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=907.398µs grafana | logger=migrator t=2024-05-23T17:02:37.632394706Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2024-05-23T17:02:37.633494396Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.098709ms grafana | logger=migrator t=2024-05-23T17:02:37.636852855Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator 
t=2024-05-23T17:02:37.637758733Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=902.148µs grafana | logger=migrator t=2024-05-23T17:02:37.64541266Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2024-05-23T17:02:37.650909088Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.500378ms grafana | logger=migrator t=2024-05-23T17:02:37.657066423Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2024-05-23T17:02:37.660342202Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.275469ms grafana | logger=migrator t=2024-05-23T17:02:37.666031232Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2024-05-23T17:02:37.671054906Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.023484ms grafana | logger=migrator t=2024-05-23T17:02:37.674500156Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2024-05-23T17:02:37.675565105Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.064439ms grafana | logger=migrator t=2024-05-23T17:02:37.683030761Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2024-05-23T17:02:37.684893348Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.866117ms grafana | logger=migrator t=2024-05-23T17:02:37.691497465Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-05-23T17:02:37.692811437Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" 
duration=1.319052ms grafana | logger=migrator t=2024-05-23T17:02:37.69885621Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2024-05-23T17:02:37.700448454Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.593974ms grafana | logger=migrator t=2024-05-23T17:02:37.705157436Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2024-05-23T17:02:37.70682158Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.664275ms grafana | logger=migrator t=2024-05-23T17:02:37.710417602Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2024-05-23T17:02:37.711513902Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.09627ms grafana | logger=migrator t=2024-05-23T17:02:37.717329343Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2024-05-23T17:02:37.718960867Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.631264ms grafana | logger=migrator t=2024-05-23T17:02:37.722919972Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2024-05-23T17:02:37.724594266Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.674224ms grafana | logger=migrator t=2024-05-23T17:02:37.727888916Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2024-05-23T17:02:37.728504051Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=635.966µs grafana | logger=migrator t=2024-05-23T17:02:37.734781016Z level=info msg="Executing migration" id="delete acl rules for 
deleted dashboards and folders" grafana | logger=migrator t=2024-05-23T17:02:37.735316041Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=534.315µs grafana | logger=migrator t=2024-05-23T17:02:37.741464145Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2024-05-23T17:02:37.742606115Z level=info msg="Migration successfully executed" id="create tag table" duration=1.14282ms grafana | logger=migrator t=2024-05-23T17:02:37.748212444Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2024-05-23T17:02:37.749322654Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.11027ms grafana | logger=migrator t=2024-05-23T17:02:37.756045683Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2024-05-23T17:02:37.756921361Z level=info msg="Migration successfully executed" id="create login attempt table" duration=875.208µs grafana | logger=migrator t=2024-05-23T17:02:37.761289909Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2024-05-23T17:02:37.762689712Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.400283ms grafana | logger=migrator t=2024-05-23T17:02:37.766957339Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2024-05-23T17:02:37.767844417Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=887.098µs grafana | logger=migrator t=2024-05-23T17:02:37.77387863Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-23T17:02:37.789107084Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - 
v1" duration=15.229184ms grafana | logger=migrator t=2024-05-23T17:02:37.792789176Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2024-05-23T17:02:37.793591023Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=801.407µs grafana | logger=migrator t=2024-05-23T17:02:37.830186595Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2024-05-23T17:02:37.831679658Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.493373ms grafana | logger=migrator t=2024-05-23T17:02:37.837897513Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2024-05-23T17:02:37.838328977Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=431.484µs grafana | logger=migrator t=2024-05-23T17:02:37.841758677Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2024-05-23T17:02:37.842642835Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=882.998µs grafana | logger=migrator t=2024-05-23T17:02:37.847208675Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2024-05-23T17:02:37.848020532Z level=info msg="Migration successfully executed" id="create user auth table" duration=811.597µs grafana | logger=migrator t=2024-05-23T17:02:37.851563553Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2024-05-23T17:02:37.852665203Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.10032ms grafana | logger=migrator t=2024-05-23T17:02:37.857801898Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator 
t=2024-05-23T17:02:37.857900249Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=99.931µs grafana | logger=migrator t=2024-05-23T17:02:37.862058876Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.867469703Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.410488ms grafana | logger=migrator t=2024-05-23T17:02:37.870778982Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.876410822Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.63116ms grafana | logger=migrator t=2024-05-23T17:02:37.881675918Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.886968915Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.292877ms grafana | logger=migrator t=2024-05-23T17:02:37.891433594Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.89671851Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.284596ms grafana | logger=migrator t=2024-05-23T17:02:37.900942217Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.901884716Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=942.419µs grafana | logger=migrator t=2024-05-23T17:02:37.904989413Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-05-23T17:02:37.912346868Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.352685ms grafana | logger=migrator 
t=2024-05-23T17:02:37.916960988Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2024-05-23T17:02:37.917797126Z level=info msg="Migration successfully executed" id="create server_lock table" duration=835.928µs grafana | logger=migrator t=2024-05-23T17:02:37.921107735Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2024-05-23T17:02:37.922041453Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=930.638µs grafana | logger=migrator t=2024-05-23T17:02:37.925377322Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2024-05-23T17:02:37.92627869Z level=info msg="Migration successfully executed" id="create user auth token table" duration=901.288µs grafana | logger=migrator t=2024-05-23T17:02:37.930413367Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2024-05-23T17:02:37.931333985Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=920.488µs grafana | logger=migrator t=2024-05-23T17:02:37.934854696Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2024-05-23T17:02:37.936280298Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.425002ms grafana | logger=migrator t=2024-05-23T17:02:37.940091982Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2024-05-23T17:02:37.941654656Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.558773ms grafana | logger=migrator t=2024-05-23T17:02:37.970283207Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator 
t=2024-05-23T17:02:37.9785808Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.298663ms grafana | logger=migrator t=2024-05-23T17:02:37.981739868Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2024-05-23T17:02:37.982712736Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=972.528µs grafana | logger=migrator t=2024-05-23T17:02:37.98651674Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2024-05-23T17:02:37.987394538Z level=info msg="Migration successfully executed" id="create cache_data table" duration=877.418µs grafana | logger=migrator t=2024-05-23T17:02:37.991790126Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2024-05-23T17:02:37.992725185Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=935.089µs grafana | logger=migrator t=2024-05-23T17:02:37.996349747Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2024-05-23T17:02:37.997209664Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=856.267µs grafana | logger=migrator t=2024-05-23T17:02:38.000596384Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-05-23T17:02:38.001603772Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.006838ms grafana | logger=migrator t=2024-05-23T17:02:38.00814005Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-05-23T17:02:38.008209281Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=69.871µs grafana | logger=migrator 
t=2024-05-23T17:02:38.013212805Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-05-23T17:02:38.013295995Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=83.58µs grafana | logger=migrator t=2024-05-23T17:02:38.01607348Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-05-23T17:02:38.017396981Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.323421ms grafana | logger=migrator t=2024-05-23T17:02:38.022000062Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-05-23T17:02:38.023023271Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.023609ms grafana | logger=migrator t=2024-05-23T17:02:38.027678952Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-05-23T17:02:38.028703871Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.024659ms grafana | logger=migrator t=2024-05-23T17:02:38.063220504Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-05-23T17:02:38.063319195Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=100.581µs grafana | logger=migrator t=2024-05-23T17:02:38.069440129Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-05-23T17:02:38.070995263Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.561044ms grafana | logger=migrator 
t=2024-05-23T17:02:38.074321312Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-05-23T17:02:38.075691534Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.369812ms grafana | logger=migrator t=2024-05-23T17:02:38.078863312Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-05-23T17:02:38.079905411Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.042149ms grafana | logger=migrator t=2024-05-23T17:02:38.084620542Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-05-23T17:02:38.085637991Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.017269ms grafana | logger=migrator t=2024-05-23T17:02:38.088739088Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-05-23T17:02:38.093113897Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.375249ms grafana | logger=migrator t=2024-05-23T17:02:38.099722305Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-05-23T17:02:38.100828055Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.10615ms grafana | logger=migrator t=2024-05-23T17:02:38.107593744Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-05-23T17:02:38.107754216Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=160.961µs grafana | logger=migrator 
grafana | logger=migrator t=2024-05-23T17:02:38.110869003Z level=info msg="Executing migration" id="recreate alert_definition_version table"
grafana | logger=migrator t=2024-05-23T17:02:38.111956653Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.09144ms
grafana | logger=migrator t=2024-05-23T17:02:38.122232443Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
grafana | logger=migrator t=2024-05-23T17:02:38.123114171Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=881.698µs
grafana | logger=migrator t=2024-05-23T17:02:38.150447211Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
grafana | logger=migrator t=2024-05-23T17:02:38.152916212Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=2.468601ms
grafana | logger=migrator t=2024-05-23T17:02:38.156917148Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-05-23T17:02:38.157018339Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=99.04µs
grafana | logger=migrator t=2024-05-23T17:02:38.162132233Z level=info msg="Executing migration" id="drop alert_definition_version table"
grafana | logger=migrator t=2024-05-23T17:02:38.163427145Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.295582ms
grafana | logger=migrator t=2024-05-23T17:02:38.197841167Z level=info msg="Executing migration" id="create alert_instance table"
grafana | logger=migrator t=2024-05-23T17:02:38.199163339Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.324712ms
grafana | logger=migrator t=2024-05-23T17:02:38.203701729Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
grafana | logger=migrator t=2024-05-23T17:02:38.204727898Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.025819ms
grafana | logger=migrator t=2024-05-23T17:02:38.209700151Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
grafana | logger=migrator t=2024-05-23T17:02:38.21068634Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=986.169µs
grafana | logger=migrator t=2024-05-23T17:02:38.215587703Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.221277663Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.68972ms
grafana | logger=migrator t=2024-05-23T17:02:38.22431768Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.225246228Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=928.318µs
grafana | logger=migrator t=2024-05-23T17:02:38.229796768Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.230687386Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=890.908µs
grafana | logger=migrator t=2024-05-23T17:02:38.234211477Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.261762919Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.536792ms
grafana | logger=migrator t=2024-05-23T17:02:38.273525902Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.299790543Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.262291ms
grafana | logger=migrator t=2024-05-23T17:02:38.306401751Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.30742871Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.026299ms
grafana | logger=migrator t=2024-05-23T17:02:38.310918091Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.312302253Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.382592ms
grafana | logger=migrator t=2024-05-23T17:02:38.317180046Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2024-05-23T17:02:38.324934454Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.754678ms
grafana | logger=migrator t=2024-05-23T17:02:38.329250472Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2024-05-23T17:02:38.335254325Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.006953ms
grafana | logger=migrator t=2024-05-23T17:02:38.338996738Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2024-05-23T17:02:38.340133888Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.1375ms
grafana | logger=migrator t=2024-05-23T17:02:38.345740167Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2024-05-23T17:02:38.346740296Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.002679ms
grafana | logger=migrator t=2024-05-23T17:02:38.353403954Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2024-05-23T17:02:38.355661494Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.25871ms
grafana | logger=migrator t=2024-05-23T17:02:38.360475906Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2024-05-23T17:02:38.361596326Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.12043ms
grafana | logger=migrator t=2024-05-23T17:02:38.367458128Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-05-23T17:02:38.367554589Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=97.931µs
grafana | logger=migrator t=2024-05-23T17:02:38.372495242Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.380785555Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.290263ms
grafana | logger=migrator t=2024-05-23T17:02:38.386568836Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.394140072Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.573767ms
grafana | logger=migrator t=2024-05-23T17:02:38.40298788Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.410913079Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.923699ms
grafana | logger=migrator t=2024-05-23T17:02:38.416531839Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2024-05-23T17:02:38.417565118Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.034689ms
grafana | logger=migrator t=2024-05-23T17:02:38.422558492Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2024-05-23T17:02:38.423712972Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.15433ms
grafana | logger=migrator t=2024-05-23T17:02:38.429346861Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.435905739Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.558178ms
grafana | logger=migrator t=2024-05-23T17:02:38.442484007Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.44966599Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.180493ms
grafana | logger=migrator t=2024-05-23T17:02:38.454721144Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2024-05-23T17:02:38.456308318Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.587074ms
grafana | logger=migrator t=2024-05-23T17:02:38.462894936Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2024-05-23T17:02:38.47015589Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.257694ms
grafana | logger=migrator t=2024-05-23T17:02:38.525141943Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2024-05-23T17:02:38.532931672Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.795659ms
grafana | logger=migrator t=2024-05-23T17:02:38.540332277Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2024-05-23T17:02:38.540554929Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=230.922µs
grafana | logger=migrator t=2024-05-23T17:02:38.544577404Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2024-05-23T17:02:38.546149498Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.573404ms
grafana | logger=migrator t=2024-05-23T17:02:38.550256334Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2024-05-23T17:02:38.551830158Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.575194ms
grafana | logger=migrator t=2024-05-23T17:02:38.556967133Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2024-05-23T17:02:38.559635737Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.670363ms
grafana | logger=migrator t=2024-05-23T17:02:38.563584741Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-05-23T17:02:38.563712492Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=130.921µs
grafana | logger=migrator t=2024-05-23T17:02:38.567578566Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2024-05-23T17:02:38.575184523Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.602397ms
grafana | logger=migrator t=2024-05-23T17:02:38.580882193Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2024-05-23T17:02:38.590576798Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.688935ms
grafana | logger=migrator t=2024-05-23T17:02:38.59759968Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2024-05-23T17:02:38.607659309Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=10.066219ms
grafana | logger=migrator t=2024-05-23T17:02:38.611378781Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2024-05-23T17:02:38.616569827Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.158656ms
grafana | logger=migrator t=2024-05-23T17:02:38.622397718Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2024-05-23T17:02:38.630734221Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.339693ms
grafana | logger=migrator t=2024-05-23T17:02:38.636135989Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2024-05-23T17:02:38.636458022Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=321.553µs
grafana | logger=migrator t=2024-05-23T17:02:38.645473331Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2024-05-23T17:02:38.646842173Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.368152ms
grafana | logger=migrator t=2024-05-23T17:02:38.651936057Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.660891826Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.953909ms
grafana | logger=migrator t=2024-05-23T17:02:38.664411407Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2024-05-23T17:02:38.664615089Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=215.362µs
grafana | logger=migrator t=2024-05-23T17:02:38.66929522Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.676396993Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.099563ms
grafana | logger=migrator t=2024-05-23T17:02:38.683319923Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2024-05-23T17:02:38.684346182Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.032209ms
grafana | logger=migrator t=2024-05-23T17:02:38.691597216Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.696280527Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.682891ms
grafana | logger=migrator t=2024-05-23T17:02:38.700472124Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2024-05-23T17:02:38.701052279Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=577.165µs
grafana | logger=migrator t=2024-05-23T17:02:38.704061206Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2024-05-23T17:02:38.704770582Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=709.106µs
grafana | logger=migrator t=2024-05-23T17:02:38.708067231Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.717857197Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.790756ms
grafana | logger=migrator t=2024-05-23T17:02:38.760614633Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2024-05-23T17:02:38.763170265Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=2.561452ms
grafana | logger=migrator t=2024-05-23T17:02:38.770921883Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2024-05-23T17:02:38.77286976Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.947027ms
grafana | logger=migrator t=2024-05-23T17:02:38.777561802Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2024-05-23T17:02:38.779079145Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.519654ms
grafana | logger=migrator t=2024-05-23T17:02:38.784789815Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2024-05-23T17:02:38.785957735Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.16672ms
grafana | logger=migrator t=2024-05-23T17:02:38.790689787Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2024-05-23T17:02:38.790772808Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=84.901µs
grafana | logger=migrator t=2024-05-23T17:02:38.795329508Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2024-05-23T17:02:38.796463218Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.13353ms
grafana | logger=migrator t=2024-05-23T17:02:38.801929636Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.804395737Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.469521ms
grafana | logger=migrator t=2024-05-23T17:02:38.808612584Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-05-23T17:02:38.809018228Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-05-23T17:02:38.813434617Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2024-05-23T17:02:38.814291014Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=856.387µs
grafana | logger=migrator t=2024-05-23T17:02:38.823310523Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2024-05-23T17:02:38.825433292Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.123619ms
grafana | logger=migrator t=2024-05-23T17:02:38.829394257Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2024-05-23T17:02:38.837446738Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.052621ms
grafana | logger=migrator t=2024-05-23T17:02:38.841592624Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2024-05-23T17:02:38.842707954Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.11515ms
grafana | logger=migrator t=2024-05-23T17:02:38.848475035Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2024-05-23T17:02:38.850276811Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.801816ms
grafana | logger=migrator t=2024-05-23T17:02:38.854435807Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2024-05-23T17:02:38.855744819Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.313042ms
grafana | logger=migrator t=2024-05-23T17:02:38.860117537Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2024-05-23T17:02:38.861217017Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.09963ms
grafana | logger=migrator t=2024-05-23T17:02:38.869278927Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2024-05-23T17:02:38.870353957Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.07581ms
grafana | logger=migrator t=2024-05-23T17:02:38.873472314Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2024-05-23T17:02:38.873508105Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=39.941µs
grafana | logger=migrator t=2024-05-23T17:02:38.878235946Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2024-05-23T17:02:38.878358907Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=119.781µs
grafana | logger=migrator t=2024-05-23T17:02:38.914324943Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2024-05-23T17:02:38.922775658Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.454684ms
grafana | logger=migrator t=2024-05-23T17:02:38.927275227Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2024-05-23T17:02:38.92759537Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=320.763µs
grafana | logger=migrator t=2024-05-23T17:02:38.930481715Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2024-05-23T17:02:38.931403444Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=920.909µs
grafana | logger=migrator t=2024-05-23T17:02:38.934714082Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2024-05-23T17:02:38.935034405Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=320.153µs
grafana | logger=migrator t=2024-05-23T17:02:38.940324542Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2024-05-23T17:02:38.941522102Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.19763ms
grafana | logger=migrator t=2024-05-23T17:02:38.944715091Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2024-05-23T17:02:38.945708619Z level=info msg="Migration successfully executed" id="create secrets table" duration=995.008µs
grafana | logger=migrator t=2024-05-23T17:02:38.949846886Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2024-05-23T17:02:38.983480601Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.604425ms
grafana | logger=migrator t=2024-05-23T17:02:38.996825678Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2024-05-23T17:02:39.009299438Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=12.46833ms
grafana | logger=migrator t=2024-05-23T17:02:39.013691397Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2024-05-23T17:02:39.013956589Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=268.012µs
grafana | logger=migrator t=2024-05-23T17:02:39.019263255Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2024-05-23T17:02:39.053861989Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.593504ms
grafana | logger=migrator t=2024-05-23T17:02:39.05736598Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2024-05-23T17:02:39.089976566Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.604616ms
grafana | logger=migrator t=2024-05-23T17:02:39.099801153Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2024-05-23T17:02:39.100518869Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=718.276µs
grafana | logger=migrator t=2024-05-23T17:02:39.112279182Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2024-05-23T17:02:39.114231309Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.954287ms
grafana | logger=migrator t=2024-05-23T17:02:39.12565318Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2024-05-23T17:02:39.126030033Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=378.053µs
grafana | logger=migrator t=2024-05-23T17:02:39.130960586Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2024-05-23T17:02:39.132138917Z level=info msg="Migration successfully executed" id="create permission table" duration=1.178001ms
grafana | logger=migrator t=2024-05-23T17:02:39.138388022Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2024-05-23T17:02:39.14045188Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.065028ms
grafana | logger=migrator t=2024-05-23T17:02:39.143774489Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2024-05-23T17:02:39.145127241Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.352462ms
grafana | logger=migrator t=2024-05-23T17:02:39.150369287Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2024-05-23T17:02:39.151503887Z level=info msg="Migration successfully executed" id="create role table" duration=1.13405ms
grafana | logger=migrator t=2024-05-23T17:02:39.157205447Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2024-05-23T17:02:39.169069781Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.864674ms
grafana | logger=migrator t=2024-05-23T17:02:39.173403159Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2024-05-23T17:02:39.179133769Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.73011ms
grafana | logger=migrator t=2024-05-23T17:02:39.183498758Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2024-05-23T17:02:39.184712748Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.19816ms
grafana | logger=migrator t=2024-05-23T17:02:39.187986637Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2024-05-23T17:02:39.189596511Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.609104ms
grafana | logger=migrator t=2024-05-23T17:02:39.248010114Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2024-05-23T17:02:39.250373595Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=2.360201ms
grafana | logger=migrator t=2024-05-23T17:02:39.255346908Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2024-05-23T17:02:39.256557229Z level=info msg="Migration successfully executed" id="create team role table" duration=1.211051ms
grafana | logger=migrator t=2024-05-23T17:02:39.260889647Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2024-05-23T17:02:39.262098698Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.208931ms
grafana | logger=migrator t=2024-05-23T17:02:39.271236137Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2024-05-23T17:02:39.27265436Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.416963ms
grafana | logger=migrator t=2024-05-23T17:02:39.27953563Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2024-05-23T17:02:39.281750839Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.212839ms
grafana | logger=migrator t=2024-05-23T17:02:39.286922495Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2024-05-23T17:02:39.288081355Z level=info msg="Migration successfully executed" id="create user role table" duration=1.16005ms
grafana | logger=migrator t=2024-05-23T17:02:39.292732896Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2024-05-23T17:02:39.293931676Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.19888ms
grafana | logger=migrator t=2024-05-23T17:02:39.298281005Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2024-05-23T17:02:39.299579746Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.298401ms
grafana | logger=migrator t=2024-05-23T17:02:39.303858353Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2024-05-23T17:02:39.305157375Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.298922ms
grafana | logger=migrator t=2024-05-23T17:02:39.332237203Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2024-05-23T17:02:39.33422922Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.991387ms
grafana | logger=migrator t=2024-05-23T17:02:39.341926248Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2024-05-23T17:02:39.344389659Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=2.464181ms
grafana | logger=migrator t=2024-05-23T17:02:39.354283986Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2024-05-23T17:02:39.356207883Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.923837ms
grafana | logger=migrator t=2024-05-23T17:02:39.362220216Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2024-05-23T17:02:39.370706181Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.487165ms
grafana | logger=migrator t=2024-05-23T17:02:39.374434763Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2024-05-23T17:02:39.376752434Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.317601ms
grafana | logger=migrator t=2024-05-23T17:02:39.382822687Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2024-05-23T17:02:39.384127679Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.305212ms
grafana | logger=migrator t=2024-05-23T17:02:39.389390195Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2024-05-23T17:02:39.390484034Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.093629ms
grafana | logger=migrator t=2024-05-23T17:02:39.395737231Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2024-05-23T17:02:39.396971141Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.23397ms
grafana | logger=migrator t=2024-05-23T17:02:39.401675002Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2024-05-23T17:02:39.402620381Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=944.589µs
grafana | logger=migrator t=2024-05-23T17:02:39.418869114Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2024-05-23T17:02:39.420795221Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.926156ms
grafana | logger=migrator t=2024-05-23T17:02:39.427941673Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2024-05-23T17:02:39.43672476Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.783667ms
grafana | logger=migrator t=2024-05-23T17:02:39.441454312Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2024-05-23T17:02:39.450199429Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.744697ms
grafana | logger=migrator t=2024-05-23T17:02:39.47875146Z level=info msg="Executing migration" id="permission attribute migration"
grafana | logger=migrator t=2024-05-23T17:02:39.490594444Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=11.843174ms
grafana | logger=migrator t=2024-05-23T17:02:39.513693686Z level=info msg="Executing migration" id="permission identifier migration"
grafana | logger=migrator t=2024-05-23T17:02:39.52545747Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=11.806414ms
grafana | logger=migrator t=2024-05-23T17:02:39.532633162Z level=info msg="Executing migration" id="add permission identifier index"
grafana | logger=migrator t=2024-05-23T17:02:39.533869613Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.231301ms
grafana | logger=migrator t=2024-05-23T17:02:39.539998127Z level=info msg="Executing migration" id="add permission action scope role_id index"
grafana | logger=migrator t=2024-05-23T17:02:39.541854854Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.856327ms
grafana | logger=migrator t=2024-05-23T17:02:39.547278541Z level=info msg="Executing migration" id="remove permission role_id action scope index"
grafana | logger=migrator t=2024-05-23T17:02:39.548500362Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.222061ms
grafana | logger=migrator t=2024-05-23T17:02:39.553756938Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2024-05-23T17:02:39.555400282Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.642904ms
grafana | logger=migrator t=2024-05-23T17:02:39.562399704Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2024-05-23T17:02:39.563921877Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.522183ms
grafana | logger=migrator t=2024-05-23T17:02:39.569415595Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2024-05-23T17:02:39.569648858Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=232.582µs
grafana | logger=migrator t=2024-05-23T17:02:39.575292037Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2024-05-23T17:02:39.575373378Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=84.001µs
grafana | logger=migrator t=2024-05-23T17:02:39.581055008Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2024-05-23T17:02:39.582066077Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.011029ms
grafana | logger=migrator t=2024-05-23T17:02:39.585693168Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2024-05-23T17:02:39.586425965Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=735.517µs
grafana | logger=migrator t=2024-05-23T17:02:39.590880444Z level=info msg="Executing migration" id="dashboard permissions
uid scopes" grafana | logger=migrator t=2024-05-23T17:02:39.591797302Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=918.678µs grafana | logger=migrator t=2024-05-23T17:02:39.596182671Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2024-05-23T17:02:39.596527494Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=344.773µs grafana | logger=migrator t=2024-05-23T17:02:39.600556659Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2024-05-23T17:02:39.601143734Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=586.655µs grafana | logger=migrator t=2024-05-23T17:02:39.605324171Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2024-05-23T17:02:39.60633803Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.013549ms grafana | logger=migrator t=2024-05-23T17:02:39.61096604Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2024-05-23T17:02:39.612637315Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.670465ms grafana | logger=migrator t=2024-05-23T17:02:39.617478948Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2024-05-23T17:02:39.626284905Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.801187ms grafana | logger=migrator t=2024-05-23T17:02:39.63024667Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2024-05-23T17:02:39.63033081Z level=info msg="Migration successfully executed" id="alter 
table query_history_star_mig column user_id type to bigint" duration=81.68µs grafana | logger=migrator t=2024-05-23T17:02:39.635524706Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-05-23T17:02:39.636494624Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=967.398µs grafana | logger=migrator t=2024-05-23T17:02:39.644231263Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-05-23T17:02:39.646349011Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.114698ms grafana | logger=migrator t=2024-05-23T17:02:39.652147872Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-05-23T17:02:39.653420323Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.271931ms grafana | logger=migrator t=2024-05-23T17:02:39.657795232Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2024-05-23T17:02:39.669969038Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.173576ms grafana | logger=migrator t=2024-05-23T17:02:39.703742465Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.705669792Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.929557ms grafana | logger=migrator t=2024-05-23T17:02:39.737960216Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.739798292Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.839046ms grafana | logger=migrator t=2024-05-23T17:02:39.744816866Z level=info msg="Executing migration" id="Rename table 
correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-23T17:02:39.770513121Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.690025ms grafana | logger=migrator t=2024-05-23T17:02:39.775243153Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2024-05-23T17:02:39.776238302Z level=info msg="Migration successfully executed" id="create correlation v2" duration=994.939µs grafana | logger=migrator t=2024-05-23T17:02:39.780047305Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.78174512Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.695905ms grafana | logger=migrator t=2024-05-23T17:02:39.787113867Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.789031834Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.918157ms grafana | logger=migrator t=2024-05-23T17:02:39.800428644Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-05-23T17:02:39.801879067Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.452653ms grafana | logger=migrator t=2024-05-23T17:02:39.806942911Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2024-05-23T17:02:39.807220764Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=277.713µs grafana | logger=migrator t=2024-05-23T17:02:39.811735924Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2024-05-23T17:02:39.813704171Z level=info msg="Migration successfully executed" id="drop 
correlation_tmp_qwerty" duration=1.964767ms grafana | logger=migrator t=2024-05-23T17:02:39.821114806Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2024-05-23T17:02:39.831065053Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.948637ms grafana | logger=migrator t=2024-05-23T17:02:39.839989421Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2024-05-23T17:02:39.841259273Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.267712ms grafana | logger=migrator t=2024-05-23T17:02:39.846514149Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2024-05-23T17:02:39.848279164Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.764345ms grafana | logger=migrator t=2024-05-23T17:02:39.854789071Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.855404827Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.860448211Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.86149588Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.873346494Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2024-05-23T17:02:39.874335513Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=985.849µs grafana | logger=migrator 
t=2024-05-23T17:02:39.883255332Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2024-05-23T17:02:39.884142069Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=886.707µs grafana | logger=migrator t=2024-05-23T17:02:39.89110638Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.893027807Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.921917ms grafana | logger=migrator t=2024-05-23T17:02:39.897244324Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-23T17:02:39.899407923Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.164469ms grafana | logger=migrator t=2024-05-23T17:02:39.90471641Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.906379685Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.663435ms grafana | logger=migrator t=2024-05-23T17:02:39.950463572Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.952629461Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.163779ms grafana | logger=migrator t=2024-05-23T17:02:39.96058711Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2024-05-23T17:02:39.961858062Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.271292ms grafana | logger=migrator 
t=2024-05-23T17:02:39.968161467Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2024-05-23T17:02:39.969221576Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.063129ms grafana | logger=migrator t=2024-05-23T17:02:39.977740291Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.979595097Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.858446ms grafana | logger=migrator t=2024-05-23T17:02:39.986271596Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-05-23T17:02:39.987627458Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.360272ms grafana | logger=migrator t=2024-05-23T17:02:39.993747572Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2024-05-23T17:02:39.995038643Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.296112ms grafana | logger=migrator t=2024-05-23T17:02:39.998606104Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-05-23T17:02:40.022588055Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.978471ms grafana | logger=migrator t=2024-05-23T17:02:40.029202003Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2024-05-23T17:02:40.040171779Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.976116ms grafana | 
logger=migrator t=2024-05-23T17:02:40.045404295Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2024-05-23T17:02:40.054581255Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.17536ms grafana | logger=migrator t=2024-05-23T17:02:40.066159597Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2024-05-23T17:02:40.066618401Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=467.064µs grafana | logger=migrator t=2024-05-23T17:02:40.071299562Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2024-05-23T17:02:40.079944128Z level=info msg="Migration successfully executed" id="add share column" duration=8.637126ms grafana | logger=migrator t=2024-05-23T17:02:40.084299456Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2024-05-23T17:02:40.084527328Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=229.022µs grafana | logger=migrator t=2024-05-23T17:02:40.087751266Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2024-05-23T17:02:40.088730155Z level=info msg="Migration successfully executed" id="create file table" duration=975.949µs grafana | logger=migrator t=2024-05-23T17:02:40.09386334Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2024-05-23T17:02:40.09495571Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.0889ms grafana | logger=migrator t=2024-05-23T17:02:40.098040867Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2024-05-23T17:02:40.099147997Z 
level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.10742ms grafana | logger=migrator t=2024-05-23T17:02:40.103780737Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2024-05-23T17:02:40.104746936Z level=info msg="Migration successfully executed" id="create file_meta table" duration=965.879µs grafana | logger=migrator t=2024-05-23T17:02:40.109519778Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2024-05-23T17:02:40.111428624Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.909877ms grafana | logger=migrator t=2024-05-23T17:02:40.116416178Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2024-05-23T17:02:40.116486569Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=71.381µs grafana | logger=migrator t=2024-05-23T17:02:40.121485143Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2024-05-23T17:02:40.121551383Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=65.72µs grafana | logger=migrator t=2024-05-23T17:02:40.124480139Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-05-23T17:02:40.125360757Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=881.728µs grafana | logger=migrator t=2024-05-23T17:02:40.133582159Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2024-05-23T17:02:40.133998303Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=420.223µs grafana | logger=migrator 
t=2024-05-23T17:02:40.137817886Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2024-05-23T17:02:40.140315098Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.497672ms grafana | logger=migrator t=2024-05-23T17:02:40.185169242Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2024-05-23T17:02:40.194245171Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.06795ms grafana | logger=migrator t=2024-05-23T17:02:40.199073713Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2024-05-23T17:02:40.199262825Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=190.082µs grafana | logger=migrator t=2024-05-23T17:02:40.204557582Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2024-05-23T17:02:40.205691162Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.13363ms grafana | logger=migrator t=2024-05-23T17:02:40.210700455Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2024-05-23T17:02:40.211140769Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=441.174µs grafana | logger=migrator t=2024-05-23T17:02:40.216294165Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2024-05-23T17:02:40.216556797Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=263.982µs grafana | logger=migrator t=2024-05-23T17:02:40.220997656Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator 
t=2024-05-23T17:02:40.221589311Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=593.495µs grafana | logger=migrator t=2024-05-23T17:02:40.22485499Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2024-05-23T17:02:40.233369954Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.509604ms grafana | logger=migrator t=2024-05-23T17:02:40.237707132Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-05-23T17:02:40.244566163Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.857451ms grafana | logger=migrator t=2024-05-23T17:02:40.248965691Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2024-05-23T17:02:40.249837719Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=873.418µs grafana | logger=migrator t=2024-05-23T17:02:40.256072784Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2024-05-23T17:02:40.33996179Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=83.886816ms grafana | logger=migrator t=2024-05-23T17:02:40.343678772Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2024-05-23T17:02:40.344684641Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.006249ms grafana | logger=migrator t=2024-05-23T17:02:40.350167089Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-05-23T17:02:40.351540712Z level=info 
msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.373063ms grafana | logger=migrator t=2024-05-23T17:02:40.354872831Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-05-23T17:02:40.383156879Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.284698ms grafana | logger=migrator t=2024-05-23T17:02:40.416107598Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2024-05-23T17:02:40.428352995Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=12.238867ms grafana | logger=migrator t=2024-05-23T17:02:40.43459005Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2024-05-23T17:02:40.434943183Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=353.243µs grafana | logger=migrator t=2024-05-23T17:02:40.438480794Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2024-05-23T17:02:40.438680216Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=199.812µs grafana | logger=migrator t=2024-05-23T17:02:40.442318118Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-05-23T17:02:40.442685971Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=371.213µs grafana | logger=migrator t=2024-05-23T17:02:40.447969077Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2024-05-23T17:02:40.448322081Z level=info msg="Migration successfully executed" id="managed folder permissions library panel 
actions migration" duration=352.653µs grafana | logger=migrator t=2024-05-23T17:02:40.452710149Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2024-05-23T17:02:40.452998662Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=287.983µs grafana | logger=migrator t=2024-05-23T17:02:40.457175698Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2024-05-23T17:02:40.458348128Z level=info msg="Migration successfully executed" id="create folder table" duration=1.16919ms grafana | logger=migrator t=2024-05-23T17:02:40.46307689Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2024-05-23T17:02:40.464961597Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.884007ms grafana | logger=migrator t=2024-05-23T17:02:40.471248942Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2024-05-23T17:02:40.472505133Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.255561ms grafana | logger=migrator t=2024-05-23T17:02:40.476372657Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2024-05-23T17:02:40.476402987Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.96µs grafana | logger=migrator t=2024-05-23T17:02:40.483493899Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-05-23T17:02:40.485549527Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.054728ms grafana | logger=migrator t=2024-05-23T17:02:40.490278299Z level=info msg="Executing migration" id="Remove unique 
index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-05-23T17:02:40.491440139Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.16221ms grafana | logger=migrator t=2024-05-23T17:02:40.494778408Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2024-05-23T17:02:40.496634894Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.858806ms grafana | logger=migrator t=2024-05-23T17:02:40.502645247Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2024-05-23T17:02:40.503360104Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=714.547µs grafana | logger=migrator t=2024-05-23T17:02:40.508452238Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2024-05-23T17:02:40.508882202Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=430.014µs grafana | logger=migrator t=2024-05-23T17:02:40.513830086Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2024-05-23T17:02:40.515042476Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.21247ms grafana | logger=migrator t=2024-05-23T17:02:40.520716566Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2024-05-23T17:02:40.522671563Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.955987ms grafana | logger=migrator t=2024-05-23T17:02:40.531116387Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator 
t=2024-05-23T17:02:40.532379098Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.262451ms grafana | logger=migrator t=2024-05-23T17:02:40.536899468Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2024-05-23T17:02:40.538246Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.346762ms grafana | logger=migrator t=2024-05-23T17:02:40.54392787Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2024-05-23T17:02:40.545017089Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.089449ms grafana | logger=migrator t=2024-05-23T17:02:40.548382478Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2024-05-23T17:02:40.54974409Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.361332ms grafana | logger=migrator t=2024-05-23T17:02:40.55314192Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2024-05-23T17:02:40.555195179Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.049188ms grafana | logger=migrator t=2024-05-23T17:02:40.55999654Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2024-05-23T17:02:40.561226491Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.230051ms grafana | logger=migrator t=2024-05-23T17:02:40.567767948Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-05-23T17:02:40.569213681Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.444203ms grafana 
| logger=migrator t=2024-05-23T17:02:40.575512877Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-05-23T17:02:40.577321643Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.809026ms
grafana | logger=migrator t=2024-05-23T17:02:40.582017064Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-05-23T17:02:40.583680198Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.668764ms
grafana | logger=migrator t=2024-05-23T17:02:40.587735654Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-05-23T17:02:40.588091257Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=356.473µs
grafana | logger=migrator t=2024-05-23T17:02:40.59185087Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-05-23T17:02:40.601725507Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.873407ms
grafana | logger=migrator t=2024-05-23T17:02:40.607047754Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-05-23T17:02:40.609013091Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.965957ms
grafana | logger=migrator t=2024-05-23T17:02:40.646118326Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-05-23T17:02:40.646147257Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=30.131µs
grafana | logger=migrator t=2024-05-23T17:02:40.64995197Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-05-23T17:02:40.651824026Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.875776ms
grafana | logger=migrator t=2024-05-23T17:02:40.65564109Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-05-23T17:02:40.65565898Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=19.07µs
grafana | logger=migrator t=2024-05-23T17:02:40.661473541Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2024-05-23T17:02:40.662874343Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.400432ms
grafana | logger=migrator t=2024-05-23T17:02:40.666465895Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-05-23T17:02:40.667804527Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.338642ms
grafana | logger=migrator t=2024-05-23T17:02:40.671096255Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-05-23T17:02:40.672235875Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.13931ms
grafana | logger=migrator t=2024-05-23T17:02:40.676790265Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-05-23T17:02:40.677644093Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=851.978µs
grafana | logger=migrator t=2024-05-23T17:02:40.681438166Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-05-23T17:02:40.681752609Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=315.143µs
grafana | logger=migrator t=2024-05-23T17:02:40.688053304Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
grafana | logger=migrator t=2024-05-23T17:02:40.688190675Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=135.411µs
grafana | logger=migrator t=2024-05-23T17:02:40.693063948Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
grafana | logger=migrator t=2024-05-23T17:02:40.706019152Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.956034ms
grafana | logger=migrator t=2024-05-23T17:02:40.709378311Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
grafana | logger=migrator t=2024-05-23T17:02:40.719123287Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.744436ms
grafana | logger=migrator t=2024-05-23T17:02:40.72742534Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
grafana | logger=migrator t=2024-05-23T17:02:40.727771193Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=346.963µs
grafana | logger=migrator t=2024-05-23T17:02:40.733044299Z level=info msg="migrations completed" performed=551 skipped=0 duration=5.364207563s
grafana | logger=sqlstore t=2024-05-23T17:02:40.745840951Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-05-23T17:02:40.746126304Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-05-23T17:02:40.752838512Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.store t=2024-05-23T17:02:40.783532092Z level=info msg="Loading plugins..."
grafana | logger=local.finder t=2024-05-23T17:02:40.82326626Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-05-23T17:02:40.823297131Z level=info msg="Plugins loaded" count=55 duration=39.764289ms
grafana | logger=query_data t=2024-05-23T17:02:40.825858753Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2024-05-23T17:02:40.829464565Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.migration t=2024-05-23T17:02:40.837251183Z level=info msg=Starting
grafana | logger=ngalert.migration t=2024-05-23T17:02:40.838217062Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
grafana | logger=ngalert.migration orgID=1 t=2024-05-23T17:02:40.839086469Z level=info msg="Migrating alerts for organisation"
grafana | logger=ngalert.migration orgID=1 t=2024-05-23T17:02:40.840462662Z level=info msg="Alerts found to migrate" alerts=0
grafana | logger=ngalert.migration t=2024-05-23T17:02:40.843176236Z level=info msg="Completed alerting migration"
grafana | logger=ngalert.state.manager t=2024-05-23T17:02:40.916237527Z level=info msg="Running in alternative execution of Error/NoData mode"
grafana | logger=infra.usagestats.collector t=2024-05-23T17:02:40.917929041Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-05-23T17:02:40.920527274Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-05-23T17:02:40.942717179Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-05-23T17:02:40.942756069Z level=info msg="finished to provision alerting"
grafana | logger=grafanaStorageLogger t=2024-05-23T17:02:40.943128942Z level=info msg="Storage starting"
grafana | logger=ngalert.state.manager t=2024-05-23T17:02:40.943231293Z level=info msg="Warming state cache for startup"
grafana | logger=http.server t=2024-05-23T17:02:40.947898504Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=ngalert.multiorg.alertmanager t=2024-05-23T17:02:40.948384569Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=grafana.update.checker t=2024-05-23T17:02:41.027248641Z level=info msg="Update check succeeded" duration=79.221995ms
grafana | logger=plugins.update.checker t=2024-05-23T17:02:41.038425028Z level=info msg="Update check succeeded" duration=95.332946ms
grafana | logger=provisioning.dashboard t=2024-05-23T17:02:41.069105097Z level=info msg="starting to provision dashboards"
grafana | logger=ngalert.state.manager t=2024-05-23T17:02:41.15377209Z level=info msg="State cache has been initialized" states=0 duration=210.530667ms
grafana | logger=ngalert.scheduler t=2024-05-23T17:02:41.153884311Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
grafana | logger=ticker t=2024-05-23T17:02:41.154022082Z level=info msg=starting first_tick=2024-05-23T17:02:50Z
grafana | logger=grafana-apiserver t=2024-05-23T17:02:41.332459257Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2024-05-23T17:02:41.333519936Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
grafana | logger=provisioning.dashboard t=2024-05-23T17:02:41.35339691Z level=info msg="finished to provision dashboards"
grafana | logger=infra.usagestats t=2024-05-23T17:03:36.955854834Z level=info msg="Usage stats are ready to report"
=================================== ========
Logs from kafka ========
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-05-23 17:02:36,827] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:os.version=4.15.0-192-generic
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,828] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,829] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,829] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,829] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,829] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,829] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,832] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:36,835] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-05-23 17:02:36,843] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-05-23 17:02:36,854] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:36,869] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:36,869] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:36,883] INFO Socket connection established, initiating session, client: /172.17.0.7:43740, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:36,926] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000002fe3b0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:37,046] INFO Session: 0x1000002fe3b0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:37,046] INFO EventThread shut down for session: 0x1000002fe3b0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2024-05-23 17:02:37,736] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-05-23 17:02:38,115] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-05-23 17:02:38,234] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-05-23 17:02:38,236] INFO starting (kafka.server.KafkaServer)
kafka | [2024-05-23 17:02:38,236] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-05-23 17:02:38,252] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2024-05-23 17:02:38,256] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client
environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,257] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,259] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-05-23 17:02:38,264] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-05-23 17:02:38,270] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:38,273] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-05-23 17:02:38,277] INFO Opening socket connection to server zookeeper/172.17.0.2:2181.
(org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:38,285] INFO Socket connection established, initiating session, client: /172.17.0.7:60594, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:38,294] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000002fe3b0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-05-23 17:02:38,299] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-05-23 17:02:38,615] INFO Cluster ID = Ve7S-UWnTtqwNqAszmlFEA (kafka.server.KafkaServer)
kafka | [2024-05-23 17:02:38,619] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-05-23 17:02:38,679] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
kafka | group.consumer.heartbeat.interval.ms = 5000
kafka | group.consumer.max.heartbeat.interval.ms = 15000
kafka | group.consumer.max.session.timeout.ms = 60000
kafka | group.consumer.max.size = 2147483647
kafka | group.consumer.min.heartbeat.interval.ms = 5000
kafka | group.consumer.min.session.timeout.ms = 45000
kafka | group.consumer.session.timeout.ms = 45000
kafka | group.coordinator.new.enable = false
kafka | group.coordinator.threads = 1
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.6-IV2
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.local.retention.bytes = -2
kafka | log.local.retention.ms = -2
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | server.max.startup.time.ms = 9223372036854775807
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.partition.verification.enable = true
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka |
transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-05-23 17:02:38,717] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-05-23 17:02:38,719] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-05-23 17:02:38,720] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-05-23 17:02:38,726] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-05-23 17:02:38,768] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-05-23 17:02:38,774] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | 
kafka | [2024-05-23 17:02:38,785] INFO Loaded 0 logs in 17ms (kafka.log.LogManager)
kafka | [2024-05-23 17:02:38,787] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2024-05-23 17:02:38,792] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2024-05-23 17:02:38,805] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2024-05-23 17:02:39,041] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
kafka | [2024-05-23 17:02:39,063] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2024-05-23 17:02:39,081] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-05-23 17:02:39,142] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-05-23 17:02:39,571] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-05-23 17:02:39,595] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2024-05-23 17:02:39,596] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-05-23 17:02:39,602] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2024-05-23 17:02:39,607] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-05-23 17:02:39,631] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,638] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,640] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,642] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,644] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,665] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
kafka | [2024-05-23 17:02:39,666] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-05-23 17:02:39,691] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-05-23 17:02:39,740] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1716483759704,1716483759704,1,0,0,72057606893142017,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-05-23 17:02:39,741] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-05-23 17:02:39,803] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-05-23 17:02:39,811] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,819] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,820] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,832] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-05-23 17:02:39,838] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:02:39,856] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,860] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:02:39,861] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,866] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-05-23 17:02:39,884] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-05-23 17:02:39,888] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-05-23 17:02:39,888] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-05-23 17:02:39,906] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,907] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-05-23 17:02:39,913] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,919] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,922] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,928] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-23 17:02:39,949] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,956] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,963] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2024-05-23 17:02:39,970] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-05-23 17:02:39,976] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2024-05-23 17:02:39,977] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,977] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,978] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,978] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,981] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,982] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,982] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,983] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-05-23 17:02:39,984] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:39,988] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-05-23 17:02:39,993] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2024-05-23 17:02:39,998] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-05-23 17:02:39,999] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-05-23 17:02:40,000] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-05-23 17:02:40,005] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-05-23 17:02:40,009] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
kafka | [2024-05-23 17:02:40,013] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
kafka | [2024-05-23 17:02:40,021] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
kafka | 	at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
kafka | 	at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
kafka | 	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
kafka | 	at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
kafka | [2024-05-23 17:02:40,024] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
kafka | [2024-05-23 17:02:40,027] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-05-23 17:02:40,028] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-05-23 17:02:40,031] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-05-23 17:02:40,032] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-05-23 17:02:40,032] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-05-23 17:02:40,032] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-05-23 17:02:40,032] INFO Kafka startTimeMs: 1716483760026 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-05-23 17:02:40,034] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2024-05-23 17:02:40,036] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-05-23 17:02:40,036] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,046] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,046] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,047] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,047] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,048] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,061] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:40,128] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2024-05-23 17:02:40,207] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-05-23 17:02:40,215] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-05-23 17:02:40,264] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on
will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-05-23 17:02:45,069] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-05-23 17:02:45,070] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-05-23 17:03:10,009] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-05-23 17:03:10,022] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-05-23 17:03:10,027] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2024-05-23 17:03:10,049] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2024-05-23 17:03:10,102] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(2BM6oo6kSEmNOxLHYnBz_A),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Uae9V7kYT-OTpt3AebnaMg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-05-23 17:03:10,104] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2024-05-23 17:03:10,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,112] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1
epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-05-23 17:03:10,115] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to
NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,125] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,126] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,126] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,126] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,126] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-05-23 17:03:10,126] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-05-23 17:03:10,397] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,397] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,398] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with
state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,399] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0,
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka |
[2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-05-23 17:03:10,402] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-05-23 
17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-05-23 17:03:10,403] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-05-23 17:03:10,404] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-05-23 17:03:10,404] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-05-23 17:03:10,407] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2024-05-23 17:03:10,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica 
to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-05-23 17:03:10,412] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-05-23 17:03:10,421] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-05-23 17:03:10,422] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,422] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,422] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,422] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,423] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,423] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,423] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,423] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,425] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,426] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,427] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,428] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,429] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-05-23 17:03:10,489] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-05-23 17:03:10,490] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-05-23 17:03:10,491] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, 
__consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-05-23 17:03:10,492] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2024-05-23 17:03:10,614] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:10,631] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:10,642] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:10,644] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:10,647] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:10,684] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:10,685] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:10,685] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:10,685] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:10,685] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-05-23 17:03:10,696] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,697] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,697] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,697] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,697] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,720] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,721] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,721] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,721] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,721] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,735] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,735] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,735] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,736] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,736] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,748] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,749] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,749] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,749] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,750] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,760] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,761] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,761] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,761] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,761] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,770] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,771] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,771] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,771] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,771] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,784] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,785] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,785] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,785] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,786] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,795] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,796] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,796] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,796] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,796] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,806] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,807] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,807] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,807] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,807] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,818] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,820] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,820] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,820] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,821] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,831] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,832] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,832] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,832] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,832] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,846] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,847] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,847] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,847] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,847] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,883] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,884] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,884] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,884] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,884] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,893] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,893] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,893] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,894] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,894] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,900] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,903] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,903] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,903] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,903] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,920] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,921] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,921] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,922] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,922] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,941] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,943] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,943] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,943] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,943] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,955] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,956] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,956] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,956] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,956] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,967] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,968] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,968] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,968] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,969] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,980] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,981] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,981] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,981] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,981] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:10,993] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:10,994] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:10,994] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,994] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:10,994] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,006] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,008] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,011] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,011] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,012] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,019] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,019] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,019] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,019] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,020] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,027] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,028] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,028] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,028] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,028] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,049] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,052] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,052] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,052] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,053] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,068] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,069] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,069] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,069] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,069] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,085] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,086] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,086] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,086] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,087] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,137] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,137] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,138] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,138] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,139] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,160] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,161] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,162] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,162] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,163] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,179] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,182] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,182] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,182] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,183] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,204] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,208] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,209] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,209] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,209] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-23 17:03:11,226] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-23 17:03:11,227] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-23 17:03:11,227] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,227] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-23 17:03:11,228] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-05-23 17:03:11,247] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,248] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,248] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,248] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,248] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(2BM6oo6kSEmNOxLHYnBz_A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,263] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,265] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,265] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,265] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,265] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,281] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,282] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,283] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,283] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,283] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,303] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,304] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,304] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,304] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,304] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,313] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,313] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,313] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,313] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,314] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,346] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,346] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,346] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,347] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,347] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,357] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,358] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,358] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,358] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,358] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,367] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,367] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,367] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,367] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,368] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,387] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,391] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,391] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,391] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,392] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,401] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,402] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,402] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,402] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,402] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,421] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,422] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,423] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,423] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,423] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,433] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,434] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,434] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,434] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,434] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,451] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,452] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,452] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,452] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,453] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,463] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,464] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,464] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,464] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,465] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,479] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,480] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,480] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,480] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,483] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,497] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,498] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,498] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,498] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,499] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,509] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-23 17:03:11,510] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-23 17:03:11,510] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,510] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-23 17:03:11,510] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Uae9V7kYT-OTpt3AebnaMg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-23 17:03:11,548] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-05-23 17:03:11,549] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-05-23 17:03:11,550] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-05-23 17:03:11,551] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-05-23 17:03:11,552] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-05-23 17:03:11,552] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-05-23 17:03:11,552] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-05-23 17:03:11,552] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-05-23 17:03:11,552] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-05-23 17:03:11,562] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,564] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,565] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,565] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,566] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,566] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,567] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,567] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,568] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-23 17:03:11,568] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,572] INFO [Broker id=1] Finished LeaderAndIsr request in 1154ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-05-23 17:03:11,573] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,578] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Uae9V7kYT-OTpt3AebnaMg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=2BM6oo6kSEmNOxLHYnBz_A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-05-23 17:03:11,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,582] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,583] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,583] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,595] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,596] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,597] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-23 17:03:11,601] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-05-23 17:03:11,679] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c in Empty state. 
Created a new member id consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:11,690] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:11,717] INFO [GroupCoordinator 1]: Preparing to rebalance group 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c in state PreparingRebalance with old generation 0 (__consumer_offsets-6) (reason: Adding new member consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:11,723] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:12,417] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 73e96feb-bc6b-4940-a2e1-bccba6481d37 in Empty state. Created a new member id consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:12,424] INFO [GroupCoordinator 1]: Preparing to rebalance group 73e96feb-bc6b-4940-a2e1-bccba6481d37 in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:14,729] INFO [GroupCoordinator 1]: Stabilized group 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c generation 1 (__consumer_offsets-6) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:14,736] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:14,760] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:14,760] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091 for group 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:15,426] INFO [GroupCoordinator 1]: Stabilized group 73e96feb-bc6b-4940-a2e1-bccba6481d37 generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-23 17:03:15,447] INFO [GroupCoordinator 1]: Assignment received from leader consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47 for group 73e96feb-bc6b-4940-a2e1-bccba6481d37 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-05-23 17:02:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-05-23 17:02:27+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-05-23 17:02:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-05-23 17:02:27+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-05-23 17:02:27 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-05-23 17:02:27 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-05-23 17:02:27 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-05-23 17:02:29+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-05-23 17:02:29+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-05-23 17:02:29+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-05-23 17:02:29 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-05-23 17:02:29 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-05-23 17:02:29 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-05-23 17:02:29 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-05-23 17:02:29 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-05-23 17:02:29 0 [Note] Plugin 'FEEDBACK' is disabled. 
mariadb | 2024-05-23 17:02:29 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-05-23 17:02:29 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-05-23 17:02:29 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-05-23 17:02:29 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-05-23 17:02:30+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-05-23 17:02:32+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-05-23 17:02:32+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-05-23 17:02:32+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-05-23 17:02:32+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb |
mariadb | 2024-05-23 17:02:33+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-05-23 17:02:33 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Starting shutdown...
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Buffer pool(s) dump completed at 240523 17:02:33
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Shutdown completed; log sequence number 327967; transaction id 298
mariadb | 2024-05-23 17:02:33 0 [Note] mariadbd: Shutdown complete
mariadb |
mariadb | 2024-05-23 17:02:33+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb |
mariadb | 2024-05-23 17:02:33+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
mariadb |
mariadb | 2024-05-23 17:02:33 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-05-23 17:02:33 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-05-23 17:02:33 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-05-23 17:02:33 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: log sequence number 327967; transaction id 299
mariadb | 2024-05-23 17:02:33 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb | 2024-05-23 17:02:33 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-05-23 17:02:33 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
mariadb | 2024-05-23 17:02:33 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb | 2024-05-23 17:02:33 0 [Note] Server socket created on IP: '::'.
mariadb | 2024-05-23 17:02:33 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mariadb | 2024-05-23 17:02:33 0 [Note] InnoDB: Buffer pool(s) load completed at 240523 17:02:33
mariadb | 2024-05-23 17:02:33 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
mariadb | 2024-05-23 17:02:33 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-05-23 17:02:33 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
mariadb | 2024-05-23 17:02:34 43 [Warning] Aborted connection 43 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
===================================
======== Logs from apex-pdp ========
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.3:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.7:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-05-23T17:03:11.120+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-05-23T17:03:11.352+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 73e96feb-bc6b-4940-a2e1-bccba6481d37
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-05-23T17:03:11.635+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-05-23T17:03:11.635+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-05-23T17:03:11.635+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483791633
policy-apex-pdp | [2024-05-23T17:03:11.638+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-1, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-05-23T17:03:11.654+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-05-23T17:03:11.654+00:00|INFO|ServiceManager|main] service manager starting topics
policy-apex-pdp | [2024-05-23T17:03:11.658+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=73e96feb-bc6b-4940-a2e1-bccba6481d37, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-apex-pdp | [2024-05-23T17:03:11.718+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 73e96feb-bc6b-4940-a2e1-bccba6481d37
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-05-23T17:03:11.734+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-05-23T17:03:11.734+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-05-23T17:03:11.734+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483791734
policy-apex-pdp | [2024-05-23T17:03:11.734+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-05-23T17:03:11.735+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=970f8842-9db6-4104-8368-b369b3727de1, alive=false, publisher=null]]: starting
policy-apex-pdp | [2024-05-23T17:03:11.761+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-apex-pdp | acks = -1
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | batch.size = 16384
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | buffer.memory = 33554432
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = producer-1
policy-apex-pdp | compression.type = none
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | delivery.timeout.ms = 120000
policy-apex-pdp | enable.idempotence = true
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp | linger.ms = 0
policy-apex-pdp | max.block.ms = 60000
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-apex-pdp | max.request.size = 1048576
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-apex-pdp | partitioner.class = null
policy-apex-pdp | partitioner.ignore.keys = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retries = 2147483647
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | transaction.timeout.ms = 60000
policy-apex-pdp | transactional.id = null
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp |
policy-apex-pdp | [2024-05-23T17:03:11.804+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-apex-pdp | [2024-05-23T17:03:11.837+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-05-23T17:03:11.837+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-05-23T17:03:11.838+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483791837
policy-apex-pdp | [2024-05-23T17:03:11.838+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=970f8842-9db6-4104-8368-b369b3727de1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-apex-pdp | [2024-05-23T17:03:11.838+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-apex-pdp | [2024-05-23T17:03:11.838+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-apex-pdp | [2024-05-23T17:03:11.840+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-apex-pdp | [2024-05-23T17:03:11.840+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=73e96feb-bc6b-4940-a2e1-bccba6481d37, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=73e96feb-bc6b-4940-a2e1-bccba6481d37, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-apex-pdp | [2024-05-23T17:03:11.844+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-apex-pdp | [2024-05-23T17:03:11.913+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-apex-pdp | []
policy-apex-pdp | [2024-05-23T17:03:11.923+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"45293f6f-7a76-469c-a501-b3e87b0dbd1b","timestampMs":1716483791850,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-05-23T17:03:12.140+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2024-05-23T17:03:12.140+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-05-23T17:03:12.140+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2024-05-23T17:03:12.140+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-05-23T17:03:12.151+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-05-23T17:03:12.151+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-05-23T17:03:12.151+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-apex-pdp | [2024-05-23T17:03:12.151+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-05-23T17:03:12.377+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-apex-pdp | [2024-05-23T17:03:12.376+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-apex-pdp | [2024-05-23T17:03:12.380+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-05-23T17:03:12.380+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-05-23T17:03:12.388+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] (Re-)joining group policy-apex-pdp | [2024-05-23T17:03:12.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Request joining group due to: need to re-join with the given member-id: consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47 policy-apex-pdp | [2024-05-23T17:03:12.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-05-23T17:03:12.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] (Re-)joining group policy-apex-pdp | [2024-05-23T17:03:12.924+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-05-23T17:03:12.925+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-05-23T17:03:15.432+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Successfully joined group with generation Generation{generationId=1, memberId='consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47', protocol='range'} policy-apex-pdp | [2024-05-23T17:03:15.441+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Finished assignment for group at generation 1: {consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-05-23T17:03:15.451+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Successfully synced group in generation Generation{generationId=1, memberId='consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2-acec8750-84f7-4f09-8410-c7d8b6618f47', protocol='range'} policy-apex-pdp | [2024-05-23T17:03:15.452+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Notifying assignor about the new 
Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-05-23T17:03:15.454+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-05-23T17:03:15.464+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-05-23T17:03:15.475+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-73e96feb-bc6b-4940-a2e1-bccba6481d37-2, groupId=73e96feb-bc6b-4940-a2e1-bccba6481d37] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-apex-pdp | [2024-05-23T17:03:31.845+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6c34f5d-8a11-4bb1-95eb-9acce8c197d3","timestampMs":1716483811845,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-23T17:03:31.872+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6c34f5d-8a11-4bb1-95eb-9acce8c197d3","timestampMs":1716483811845,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-23T17:03:31.876+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | 
[2024-05-23T17:03:32.110+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.125+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-05-23T17:03:32.125+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c0bcdfed-643d-40c1-9ee3-b96a4502aae7","timestampMs":1716483812125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-23T17:03:32.126+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a490bbfc-ad0d-41bf-96bd-2ac1c9eedf67","timestampMs":1716483812126,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.143+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c0bcdfed-643d-40c1-9ee3-b96a4502aae7","timestampMs":1716483812125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"} policy-apex-pdp | 
[2024-05-23T17:03:32.148+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-23T17:03:32.148+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a490bbfc-ad0d-41bf-96bd-2ac1c9eedf67","timestampMs":1716483812126,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.148+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-23T17:03:32.175+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.177+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"458015b7-8401-449c-9eb1-b0d2021374d4","timestampMs":1716483812177,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.195+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"458015b7-8401-449c-9eb1-b0d2021374d4","timestampMs":1716483812177,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.196+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-23T17:03:32.266+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ccc20f04-2017-43bc-946c-f72ac157c659","timestampMs":1716483812179,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.268+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ccc20f04-2017-43bc-946c-f72ac157c659","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"58079e69-5dbf-42c4-9690-1ac5ee585864","timestampMs":1716483812268,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.284+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ccc20f04-2017-43bc-946c-f72ac157c659","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"58079e69-5dbf-42c4-9690-1ac5ee585864","timestampMs":1716483812268,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:03:32.284+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-23T17:03:56.175+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.5 - policyadmin [23/May/2024:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10637 "-" "Prometheus/2.52.0" policy-apex-pdp | [2024-05-23T17:04:56.082+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.5 - policyadmin [23/May/2024:17:04:56 +0000] "GET /metrics HTTP/1.1" 200 10635 "-" "Prometheus/2.52.0" policy-apex-pdp | [2024-05-23T17:05:32.126+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"4429337c-7b05-4d7d-89e8-045f147fc803","timestampMs":1716483932125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:05:32.145+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"4429337c-7b05-4d7d-89e8-045f147fc803","timestampMs":1716483932125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-23T17:05:32.145+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-05-23T17:02:44.227+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-05-23T17:02:44.338+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-05-23T17:02:44.340+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-05-23T17:02:46.649+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-05-23T17:02:46.735+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 77 ms. Found 6 JPA repository interfaces. 
policy-api | [2024-05-23T17:02:47.206+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-05-23T17:02:47.207+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-05-23T17:02:47.943+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-05-23T17:02:47.952+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-05-23T17:02:47.954+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-05-23T17:02:47.954+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-05-23T17:02:48.050+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-05-23T17:02:48.050+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3598 ms policy-api | [2024-05-23T17:02:48.488+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-05-23T17:02:48.547+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-05-23T17:02:48.589+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-05-23T17:02:48.855+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-05-23T17:02:48.891+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
policy-api | [2024-05-23T17:02:48.989+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7718a40f policy-api | [2024-05-23T17:02:48.991+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2024-05-23T17:02:51.401+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-05-23T17:02:51.404+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-05-23T17:02:52.584+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-05-23T17:02:53.543+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-05-23T17:02:55.235+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-05-23T17:02:55.506+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7f930614, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6ef0a044, org.springframework.security.web.context.SecurityContextHolderFilter@231e5af, org.springframework.security.web.header.HeaderWriterFilter@4c48ccc4, org.springframework.security.web.authentication.logout.LogoutFilter@73d7b6b0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@25b2d26a, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56ed024b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5a26a14, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@73e505d5, org.springframework.security.web.access.ExceptionTranslationFilter@1d93bd2a, org.springframework.security.web.access.intercept.AuthorizationFilter@43cbc87f] policy-api | [2024-05-23T17:02:56.574+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-05-23T17:02:56.711+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-05-23T17:02:56.733+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-05-23T17:02:56.753+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 13.25 seconds (process running for 14.0) policy-api | [2024-05-23T17:03:39.964+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-05-23T17:03:39.964+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-05-23T17:03:39.967+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed 
initialization in 3 ms policy-api | [2024-05-23T17:03:51.594+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: policy-api | [] =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v CLAMP_K8S_TEST: policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Test policy-csit | ============================================================================== policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Healthcheck :: Verify policy pap health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | policy-csit | 22 tests, 22 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Slas policy-csit | ============================================================================== policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | policy-csit | 8 tests, 8 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas | PASS | policy-csit | 30 tests, 30 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) 
DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE 
TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, 
PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB 
DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, 
localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion 
VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version 
VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- 
policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, 
version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) 
NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) 
NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- 
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX 
FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | 
-------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE 
toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > 
upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName 
VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- 
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 
0100-jpapdpgroup_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:34 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:35 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:36 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 53 
0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:37 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 71 
0800-toscaservicetemplate.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:38 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 87 
0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2305241702340800u 1 2024-05-23 17:02:39 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2305241702340900u 1 2024-05-23 17:02:40 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2305241702340900u 1 
2024-05-23 17:02:40 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2305241702340900u 1
2024-05-23 17:02:40 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:40 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:40 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:40 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:40 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:40 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:41 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:41 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:41 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2305241702341000u 1
2024-05-23 17:02:41 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2305241702341100u 1
2024-05-23 17:02:41 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2305241702341200u 1
2024-05-23 17:02:41 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2305241702341200u 1
2024-05-23 17:02:41 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2305241702341200u 1
2024-05-23 17:02:41 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2305241702341200u 1
2024-05-23 17:02:41 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2305241702341300u 1
2024-05-23 17:02:41 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2305241702341300u 1
2024-05-23 17:02:41 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2305241702341300u 1
2024-05-23 17:02:41 policy-db-migrator | policyadmin: OK @ 1300
===================================
======== Logs from pap ========
policy-pap | Waiting for mariadb port 3306...
policy-pap | mariadb (172.17.0.3:3306) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.7:9092) open
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.9:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.10)
policy-pap |
policy-pap | [2024-05-23T17:02:59.334+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-pap | [2024-05-23T17:02:59.403+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 36 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-05-23T17:02:59.404+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-05-23T17:03:01.756+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-05-23T17:03:01.861+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 94 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-05-23T17:03:02.377+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-05-23T17:03:02.378+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-05-23T17:03:03.078+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-05-23T17:03:03.089+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-05-23T17:03:03.091+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-05-23T17:03:03.091+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-pap | [2024-05-23T17:03:03.196+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-05-23T17:03:03.196+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3713 ms
policy-pap | [2024-05-23T17:03:03.627+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2024-05-23T17:03:03.683+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
policy-pap | [2024-05-23T17:03:04.094+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2024-05-23T17:03:04.201+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@72f8ae0c
policy-pap | [2024-05-23T17:03:04.204+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
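The pap entries above all share one shape: a docker-compose service prefix ("policy-pap |") followed by a Spring-style record "[timestamp|LEVEL|Logger|thread] message". When triaging a CSIT run it can help to split that shape into fields; a minimal sketch, assuming this exact layout (the `parse_pap_line` helper is hypothetical, not part of the CSIT tooling):

```python
import re

# Matches "service | [timestamp|LEVEL|Logger|thread] message" as seen in the
# policy-pap console output above. Named groups expose each field.
LINE_RE = re.compile(
    r"^(?P<service>[\w-]+)\s*\|\s*"
    r"\[(?P<ts>[^|]+)\|(?P<level>[A-Z]+)\|(?P<logger>[^|]*)\|(?P<thread>[^]]*)\]\s*"
    r"(?P<msg>.*)$"
)

def parse_pap_line(line: str):
    """Return a dict of fields, or None if the line is not in this format."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None

example = ("policy-pap | [2024-05-23T17:03:04.204+00:00|INFO|HikariDataSource|main] "
           "HikariPool-1 - Start completed.")
```

Lines that do not carry the bracketed header (the port-wait messages, the banner) simply return None, so the helper can be run over the whole log.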
policy-pap | [2024-05-23T17:03:04.234+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-05-23T17:03:05.887+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-05-23T17:03:05.898+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-05-23T17:03:06.401+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-05-23T17:03:06.817+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-05-23T17:03:06.955+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-05-23T17:03:07.227+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | 
ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-05-23T17:03:07.403+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-23T17:03:07.404+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-23T17:03:07.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483787402 policy-pap | [2024-05-23T17:03:07.407+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-1, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-05-23T17:03:07.408+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | 
fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 
policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-05-23T17:03:07.414+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | 
[2024-05-23T17:03:07.414+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-23T17:03:07.414+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483787414 policy-pap | [2024-05-23T17:03:07.414+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-05-23T17:03:07.804+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-05-23T17:03:07.973+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-05-23T17:03:08.321+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@41abee65, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6fc6f68f, org.springframework.security.web.context.SecurityContextHolderFilter@5ae16aa, org.springframework.security.web.header.HeaderWriterFilter@5ffdd510, org.springframework.security.web.authentication.logout.LogoutFilter@4fd63c43, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@40db6136, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3051e476, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6d9ee75a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@297dff3a, org.springframework.security.web.access.ExceptionTranslationFilter@29dfc68f, org.springframework.security.web.access.intercept.AuthorizationFilter@60b4d934] policy-pap | [2024-05-23T17:03:09.210+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-05-23T17:03:09.360+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-05-23T17:03:09.384+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-05-23T17:03:09.406+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-05-23T17:03:09.407+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-05-23T17:03:09.408+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-05-23T17:03:09.409+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-05-23T17:03:09.409+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID 
Dispatcher policy-pap | [2024-05-23T17:03:09.410+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-05-23T17:03:09.410+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-05-23T17:03:09.413+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@66779435 policy-pap | [2024-05-23T17:03:09.438+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-05-23T17:03:09.440+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | 
check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 6eb2b9f6-cc1f-4668-b97f-fa19dc06347c policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | 
sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = 
null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-05-23T17:03:09.447+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-23T17:03:09.447+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-23T17:03:09.447+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483789446 policy-pap | [2024-05-23T17:03:09.447+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-05-23T17:03:09.450+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-05-23T17:03:09.450+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7ae1464f-d13f-45e5-b087-a4861bd7813f, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@64eb14da policy-pap | [2024-05-23T17:03:09.450+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7ae1464f-d13f-45e5-b087-a4861bd7813f, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-05-23T17:03:09.451+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | 
socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-05-23T17:03:09.459+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-23T17:03:09.459+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-23T17:03:09.459+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483789459 policy-pap | [2024-05-23T17:03:09.460+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-05-23T17:03:09.460+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-05-23T17:03:09.461+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7ae1464f-d13f-45e5-b087-a4861bd7813f, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-05-23T17:03:09.461+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-05-23T17:03:09.461+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3539d1eb-b7b7-4649-88d7-8e32c5d0d576, alive=false, publisher=null]]: starting
policy-pap | [2024-05-23T17:03:09.485+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-1
policy-pap | 	compression.type = none
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2024-05-23T17:03:09.499+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2024-05-23T17:03:09.522+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | [2024-05-23T17:03:09.522+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-05-23T17:03:09.522+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483789522
policy-pap | [2024-05-23T17:03:09.523+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3539d1eb-b7b7-4649-88d7-8e32c5d0d576, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2024-05-23T17:03:09.523+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=456abe0a-0ac5-4830-b5f9-793140af6ecd, alive=false, publisher=null]]: starting
policy-pap | [2024-05-23T17:03:09.524+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-2
policy-pap | 	compression.type = none
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2024-05-23T17:03:09.525+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-pap | [2024-05-23T17:03:09.528+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | [2024-05-23T17:03:09.529+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-05-23T17:03:09.529+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1716483789528
policy-pap | [2024-05-23T17:03:09.529+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=456abe0a-0ac5-4830-b5f9-793140af6ecd, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2024-05-23T17:03:09.529+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2024-05-23T17:03:09.529+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2024-05-23T17:03:09.536+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2024-05-23T17:03:09.536+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2024-05-23T17:03:09.539+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2024-05-23T17:03:09.539+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2024-05-23T17:03:09.541+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2024-05-23T17:03:09.541+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2024-05-23T17:03:09.542+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2024-05-23T17:03:09.543+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2024-05-23T17:03:09.544+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2024-05-23T17:03:09.546+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.062 seconds (process running for 11.859)
policy-pap | [2024-05-23T17:03:09.991+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2024-05-23T17:03:09.998+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-pap | [2024-05-23T17:03:09.998+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-pap | [2024-05-23T17:03:09.999+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-pap | [2024-05-23T17:03:10.090+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.091+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Ve7S-UWnTtqwNqAszmlFEA
policy-pap | [2024-05-23T17:03:10.096+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.133+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2024-05-23T17:03:10.133+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2024-05-23T17:03:10.209+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2024-05-23T17:03:10.209+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.314+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2024-05-23T17:03:10.339+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.424+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.464+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.543+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.582+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.683+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.707+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.794+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.816+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.901+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:10.934+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.007+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.053+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.117+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.164+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.227+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.274+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.337+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.392+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.454+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.506+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.561+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-05-23T17:03:11.648+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2024-05-23T17:03:11.656+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] (Re-)joining group
policy-pap | [2024-05-23T17:03:11.671+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2024-05-23T17:03:11.679+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2024-05-23T17:03:11.697+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Request joining group due to: need to re-join with the given member-id: consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091
policy-pap | [2024-05-23T17:03:11.698+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | [2024-05-23T17:03:11.698+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] (Re-)joining group
policy-pap | [2024-05-23T17:03:11.698+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c
policy-pap | [2024-05-23T17:03:11.699+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | [2024-05-23T17:03:11.699+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2024-05-23T17:03:14.734+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091', protocol='range'}
policy-pap | [2024-05-23T17:03:14.737+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c', protocol='range'}
policy-pap | [2024-05-23T17:03:14.745+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2024-05-23T17:03:14.745+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Finished assignment for group at generation 1: {consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2024-05-23T17:03:14.795+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-333eab6c-5cf1-4372-a66d-fcebf1c2237c', protocol='range'}
policy-pap | [2024-05-23T17:03:14.795+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-05-23T17:03:14.796+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3-a95c645a-8554-4ece-a128-3e833d43c091', protocol='range'}
policy-pap | [2024-05-23T17:03:14.797+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-05-23T17:03:14.798+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2024-05-23T17:03:14.798+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2024-05-23T17:03:14.820+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2024-05-23T17:03:14.820+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2024-05-23T17:03:14.844+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6eb2b9f6-cc1f-4668-b97f-fa19dc06347c-3, groupId=6eb2b9f6-cc1f-4668-b97f-fa19dc06347c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2024-05-23T17:03:14.844+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2024-05-23T17:03:31.894+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2024-05-23T17:03:31.896+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6c34f5d-8a11-4bb1-95eb-9acce8c197d3","timestampMs":1716483811845,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"}
policy-pap | [2024-05-23T17:03:31.896+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6c34f5d-8a11-4bb1-95eb-9acce8c197d3","timestampMs":1716483811845,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"}
policy-pap | [2024-05-23T17:03:31.906+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2024-05-23T17:03:32.054+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting
policy-pap | [2024-05-23T17:03:32.054+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting listener
policy-pap | [2024-05-23T17:03:32.055+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting timer
policy-pap | [2024-05-23T17:03:32.055+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2, expireMs=1716483842055]
policy-pap | [2024-05-23T17:03:32.057+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting enqueue
policy-pap | [2024-05-23T17:03:32.057+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate started
policy-pap | [2024-05-23T17:03:32.057+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2, expireMs=1716483842055]
policy-pap | [2024-05-23T17:03:32.063+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.107+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.107+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.110+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2024-05-23T17:03:32.110+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2024-05-23T17:03:32.143+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c0bcdfed-643d-40c1-9ee3-b96a4502aae7","timestampMs":1716483812125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"}
policy-pap | [2024-05-23T17:03:32.144+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2024-05-23T17:03:32.144+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a490bbfc-ad0d-41bf-96bd-2ac1c9eedf67","timestampMs":1716483812126,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.144+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping
policy-pap | [2024-05-23T17:03:32.145+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping enqueue
policy-pap | [2024-05-23T17:03:32.145+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping timer
policy-pap | [2024-05-23T17:03:32.145+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2, expireMs=1716483842055]
policy-pap | [2024-05-23T17:03:32.145+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping listener
policy-pap | [2024-05-23T17:03:32.145+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopped
policy-pap | [2024-05-23T17:03:32.146+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c0bcdfed-643d-40c1-9ee3-b96a4502aae7","timestampMs":1716483812125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup"}
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate successful
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 start publishing next request
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange starting
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange starting listener
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange starting timer
policy-pap | [2024-05-23T17:03:32.152+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=6ceeed4d-5f92-4b6a-afe6-20c1a43493ef, expireMs=1716483842152]
policy-pap | [2024-05-23T17:03:32.153+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange starting enqueue
policy-pap | [2024-05-23T17:03:32.153+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange started
policy-pap | [2024-05-23T17:03:32.153+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=6ceeed4d-5f92-4b6a-afe6-20c1a43493ef, expireMs=1716483842152]
policy-pap | [2024-05-23T17:03:32.154+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.196+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.196+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-05-23T17:03:32.202+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"458015b7-8401-449c-9eb1-b0d2021374d4","timestampMs":1716483812177,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.249+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a490bbfc-ad0d-41bf-96bd-2ac1c9eedf67","timestampMs":1716483812126,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-23T17:03:32.250+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange stopping
policy-pap | [2024-05-23T17:03:32.250+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2
policy-pap | [2024-05-23T17:03:32.250+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange stopping enqueue
policy-pap | [2024-05-23T17:03:32.250+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange stopping timer
policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=6ceeed4d-5f92-4b6a-afe6-20c1a43493ef, expireMs=1716483842152]
policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange stopping listener
policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486
PdpStateChange stopped policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpStateChange successful policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 start publishing next request policy-pap | [2024-05-23T17:03:32.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting policy-pap | [2024-05-23T17:03:32.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting listener policy-pap | [2024-05-23T17:03:32.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting timer policy-pap | [2024-05-23T17:03:32.252+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=ccc20f04-2017-43bc-946c-f72ac157c659, expireMs=1716483842252] policy-pap | [2024-05-23T17:03:32.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate starting enqueue policy-pap | [2024-05-23T17:03:32.253+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ccc20f04-2017-43bc-946c-f72ac157c659","timestampMs":1716483812179,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.254+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate started policy-pap | [2024-05-23T17:03:32.259+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","timestampMs":1716483812033,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.259+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-05-23T17:03:32.266+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6ceeed4d-5f92-4b6a-afe6-20c1a43493ef","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"458015b7-8401-449c-9eb1-b0d2021374d4","timestampMs":1716483812177,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.267+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6ceeed4d-5f92-4b6a-afe6-20c1a43493ef policy-pap | [2024-05-23T17:03:32.269+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ccc20f04-2017-43bc-946c-f72ac157c659","timestampMs":1716483812179,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.271+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-05-23T17:03:32.276+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-20edb6ef-4a9b-4653-b0c7-fe469441e743","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ccc20f04-2017-43bc-946c-f72ac157c659","timestampMs":1716483812179,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.276+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-05-23T17:03:32.284+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ccc20f04-2017-43bc-946c-f72ac157c659","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"58079e69-5dbf-42c4-9690-1ac5ee585864","timestampMs":1716483812268,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:03:32.285+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping policy-pap | [2024-05-23T17:03:32.285+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping enqueue policy-pap | [2024-05-23T17:03:32.285+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ccc20f04-2017-43bc-946c-f72ac157c659","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"58079e69-5dbf-42c4-9690-1ac5ee585864","timestampMs":1716483812268,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | 
[2024-05-23T17:03:32.286+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ccc20f04-2017-43bc-946c-f72ac157c659 policy-pap | [2024-05-23T17:03:32.285+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping timer policy-pap | [2024-05-23T17:03:32.287+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ccc20f04-2017-43bc-946c-f72ac157c659, expireMs=1716483842252] policy-pap | [2024-05-23T17:03:32.287+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopping listener policy-pap | [2024-05-23T17:03:32.287+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate stopped policy-pap | [2024-05-23T17:03:32.290+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 PdpUpdate successful policy-pap | [2024-05-23T17:03:32.291+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486 has no more requests policy-pap | [2024-05-23T17:03:41.592+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-05-23T17:03:41.592+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-05-23T17:03:41.595+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms policy-pap | [2024-05-23T17:04:02.056+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=4e121eb0-9c1d-4ab7-b0e3-12bd950eddc2, expireMs=1716483842055] policy-pap | [2024-05-23T17:04:02.152+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=6ceeed4d-5f92-4b6a-afe6-20c1a43493ef, expireMs=1716483842152] policy-pap | [2024-05-23T17:04:13.500+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to 
injection-less client. policy-pap | [2024-05-23T17:04:13.569+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-05-23T17:04:13.584+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-05-23T17:04:13.586+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-05-23T17:04:14.029+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup policy-pap | [2024-05-23T17:04:14.719+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup policy-pap | [2024-05-23T17:04:14.720+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup policy-pap | [2024-05-23T17:04:15.327+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2024-05-23T17:04:15.613+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-05-23T17:04:15.717+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-05-23T17:04:15.718+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup policy-pap | [2024-05-23T17:04:15.719+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup policy-pap | [2024-05-23T17:04:15.773+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-05-23T17:04:15Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-05-23T17:04:15Z, user=policyadmin)] policy-pap | [2024-05-23T17:04:16.671+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-pap | 
[2024-05-23T17:04:16.672+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-pap | [2024-05-23T17:04:16.672+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-05-23T17:04:16.673+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup policy-pap | [2024-05-23T17:04:16.673+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup policy-pap | [2024-05-23T17:04:16.830+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-23T17:04:16Z, user=policyadmin)] policy-pap | [2024-05-23T17:04:17.235+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup policy-pap | [2024-05-23T17:04:17.235+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup policy-pap | [2024-05-23T17:04:17.235+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2024-05-23T17:04:17.235+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-05-23T17:04:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup policy-pap | [2024-05-23T17:04:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup policy-pap | [2024-05-23T17:04:17.298+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-23T17:04:17Z, user=policyadmin)] policy-pap | [2024-05-23T17:04:37.926+00:00|INFO|SessionData|http-nio-6969-exec-1] 
cache group testGroup policy-pap | [2024-05-23T17:04:37.929+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup policy-pap | [2024-05-23T17:05:09.544+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2024-05-23T17:05:32.145+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"4429337c-7b05-4d7d-89e8-045f147fc803","timestampMs":1716483932125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:05:32.146+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"4429337c-7b05-4d7d-89e8-045f147fc803","timestampMs":1716483932125,"name":"apex-fcbee86d-d5cb-4ec7-aca0-b6a2da99e486","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-23T17:05:32.147+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus =================================== ======== Logs from prometheus ======== prometheus | ts=2024-05-23T17:02:30.249Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-05-23T17:02:30.249Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.52.0, branch=HEAD, revision=879d80922a227c37df502e7315fad8ceb10a986d)" prometheus | ts=2024-05-23T17:02:30.249Z caller=main.go:622 level=info build_context="(go=go1.22.3, platform=linux/amd64, user=root@1b4f4c206e41, date=20240508-21:56:43, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-05-23T17:02:30.249Z caller=main.go:623 level=info 
host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-05-23T17:02:30.249Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-05-23T17:02:30.250Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-05-23T17:02:30.255Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-05-23T17:02:30.259Z caller=main.go:1129 level=info msg="Starting TSDB ..." prometheus | ts=2024-05-23T17:02:30.262Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-05-23T17:02:30.262Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 prometheus | ts=2024-05-23T17:02:30.267Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-05-23T17:02:30.267Z caller=head.go:703 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.37µs prometheus | ts=2024-05-23T17:02:30.267Z caller=head.go:711 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-05-23T17:02:30.268Z caller=head.go:783 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-05-23T17:02:30.268Z caller=head.go:820 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=41.21µs wal_replay_duration=636.476µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.37µs total_replay_duration=721.386µs prometheus | ts=2024-05-23T17:02:30.270Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-05-23T17:02:30.270Z caller=main.go:1153 level=info msg="TSDB started" prometheus | ts=2024-05-23T17:02:30.270Z caller=main.go:1335 level=info msg="Loading configuration 
file" filename=/etc/prometheus/prometheus.yml prometheus | ts=2024-05-23T17:02:30.271Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=700.926µs db_storage=1.33µs remote_storage=1.67µs web_handler=820ns query_engine=1µs scrape=164.142µs scrape_sd=103.421µs notify=25.12µs notify_sd=8.49µs rules=2.35µs tracing=3.98µs prometheus | ts=2024-05-23T17:02:30.271Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-05-23T17:02:30.271Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." =================================== ======== Logs from simulator ======== simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-05-23 17:02:28,742 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-05-23 17:02:28,824 INFO org.onap.policy.models.simulators starting simulator | 2024-05-23 17:02:28,825 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-05-23 17:02:29,052 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-05-23 17:02:29,053 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-05-23 17:02:29,213 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI 
simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-05-23 17:02:29,228 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:29,235 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:29,243 INFO 
jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-05-23 17:02:29,325 INFO Session workerName=node0 simulator | 2024-05-23 17:02:29,977 INFO Using GSON for REST calls simulator | 2024-05-23 17:02:30,056 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} simulator | 2024-05-23 17:02:30,063 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-05-23 17:02:30,072 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1999ms simulator | 2024-05-23 17:02:30,074 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4158 ms. 
simulator | 2024-05-23 17:02:30,084 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-05-23 17:02:30,089 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-05-23 17:02:30,089 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:30,090 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, 
swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:30,091 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-05-23 17:02:30,101 INFO Session workerName=node0 simulator | 2024-05-23 17:02:30,156 INFO Using GSON for REST calls simulator | 2024-05-23 17:02:30,166 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} simulator | 2024-05-23 17:02:30,168 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-05-23 17:02:30,168 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2095ms simulator | 2024-05-23 17:02:30,168 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4922 ms. simulator | 2024-05-23 17:02:30,204 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-05-23 17:02:30,207 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-05-23 17:02:30,207 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:30,210 INFO 
JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:30,211 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-05-23 17:02:30,216 INFO Session workerName=node0 simulator | 2024-05-23 17:02:30,304 INFO Using GSON for REST calls simulator | 2024-05-23 17:02:30,319 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} simulator | 2024-05-23 17:02:30,321 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-05-23 17:02:30,321 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2248ms simulator | 2024-05-23 17:02:30,321 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, 
jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4888 ms. simulator | 2024-05-23 17:02:30,323 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-05-23 17:02:30,327 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-05-23 17:02:30,327 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 
simulator | 2024-05-23 17:02:30,328 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-05-23 17:02:30,328 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-05-23 17:02:30,331 INFO Session workerName=node0 simulator | 2024-05-23 17:02:30,377 INFO Using GSON for REST calls simulator | 2024-05-23 17:02:30,386 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} simulator | 2024-05-23 17:02:30,387 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} simulator | 2024-05-23 17:02:30,387 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2314ms simulator | 2024-05-23 17:02:30,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC 
simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4941 ms. simulator | 2024-05-23 17:02:30,389 INFO org.onap.policy.models.simulators started =================================== ======== Logs from zookeeper ======== zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2024-05-23 17:02:33,121] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,131] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,131] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,131] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,131] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,133] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-23 17:02:33,134] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-23 17:02:33,134] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-23 17:02:33,134] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-05-23 17:02:33,135] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-05-23 17:02:33,136] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,136] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,136] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,137] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,137] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-23 17:02:33,137] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-05-23 17:02:33,155] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-05-23 17:02:33,158] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-05-23 17:02:33,158] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-05-23 17:02:33,160] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-05-23 17:02:33,170] INFO [ZooKeeper ASCII-art startup banner elided; rendered unreadable by whitespace collapse] (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kaf
ka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/
bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:
/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:user.name=appuser 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,172] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,173] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,173] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,174] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2024-05-23 17:02:33,175] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,175] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,176] INFO getData 
response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-05-23 17:02:33,176] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,177] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-05-23 17:02:33,180] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,180] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,181] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-05-23 17:02:33,181] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-05-23 17:02:33,181] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,202] INFO Logging initialized @619ms to 
org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-05-23 17:02:33,339] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-05-23 17:02:33,339] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-05-23 17:02:33,360] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-05-23 17:02:33,395] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-05-23 17:02:33,395] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-05-23 17:02:33,396] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2024-05-23 17:02:33,399] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-05-23 17:02:33,410] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-05-23 17:02:33,427] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-05-23 17:02:33,427] INFO Started @844ms (org.eclipse.jetty.server.Server) zookeeper | [2024-05-23 17:02:33,427] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-05-23 17:02:33,437] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-05-23 17:02:33,438] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-05-23 17:02:33,440] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-05-23 17:02:33,442] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-05-23 17:02:33,462] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-05-23 17:02:33,462] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-05-23 17:02:33,463] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-05-23 17:02:33,464] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-05-23 17:02:33,469] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-05-23 17:02:33,469] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-05-23 17:02:33,472] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-05-23 17:02:33,473] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-05-23 17:02:33,473] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-23 17:02:33,485] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-05-23 17:02:33,486] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms 
(org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-05-23 17:02:33,501] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-05-23 17:02:33,502] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-05-23 17:02:36,902] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... Container policy-csit Stopping Container grafana Stopping Container policy-apex-pdp Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-apex-pdp Stopped Container policy-apex-pdp Removing Container policy-apex-pdp Removed Container simulator Stopping Container policy-pap Stopping Container simulator Stopped Container simulator Removing Container simulator Removed Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container mariadb Stopping Container mariadb Stopped Container mariadb Removing Container mariadb Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2132 killed; [ssh-agent] Stopped. 
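The package-listing post-build step later in this log diffs an installed-package snapshot taken at job start against one taken at job end (the trace shows `dpkg -l | grep '^ii'` feeding `packages_start.txt`, `packages_end.txt`, and `packages_diff.txt`). A minimal self-contained sketch of that diff logic follows; the file names mirror the trace, but the dpkg output is stubbed with static lists here so the sketch runs on any machine:

```shell
#!/usr/bin/env bash
# Sketch of the package-listing.sh diff step: snapshot the installed-package
# list at job start and end, diff the snapshots, and keep all three files
# for archiving. The real job fills the snapshots with: dpkg -l | grep '^ii'
set -eu

workdir="${TMPDIR:-/tmp}/package-listing-sketch"
mkdir -p "$workdir"
START_PACKAGES="$workdir/packages_start.txt"
END_PACKAGES="$workdir/packages_end.txt"
DIFF_PACKAGES="$workdir/packages_diff.txt"

# Stubbed snapshots (assumption: real content comes from dpkg on the agent)
printf 'ii  curl  7.58\nii  git   2.17\n' > "$START_PACKAGES"
printf 'ii  curl  7.58\nii  git   2.17\nii  jq    1.5\n' > "$END_PACKAGES"

# diff exits non-zero when the files differ; mask that status under set -e
diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true

# The job then copies all three files into the workspace archives/ directory
cat "$DIFF_PACKAGES"
```

The diff output records any packages installed during the build (here the stubbed `jq` line), which is what ends up in `archives/packages_diff.txt`.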
Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins3294782550435124351.sh ---> sysstat.sh [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins11393817962550315724.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins6727770656898724847.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-PZ3T from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-PZ3T/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins15177960738702402561.sh provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config15792074784252415576tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins527655043800798815.sh ---> create-netrc.sh [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins1920003545842111016.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-PZ3T from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-PZ3T/bin to PATH [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16198174401013577813.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins2125140168267649667.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-PZ3T from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-PZ3T/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins1214292063370149069.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-PZ3T from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-PZ3T/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/5
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-3642 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         880       25179           0        6107       30831
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:08:55:4b brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.124/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85943sec preferred_lft 85943sec
    inet6 fe80::f816:3eff:fe08:554b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:25:9a:37:86 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:25ff:fe9a:3786/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-3642)  05/23/24  _x86_64_  (8 CPU)

16:59:27  LINUX RESTART  (8 CPU)

17:00:02        tps      rtps      wtps   bread/s   bwrtn/s
17:01:01     292.24     43.68    248.56   1966.52  42237.21
17:02:01     244.08     19.36    224.71   2322.81  96089.05
17:03:01     412.23     12.20    400.03    772.14 111616.46
17:04:01     154.42      0.45    153.97     42.79  49831.24
17:05:01      13.43      0.02     13.41      0.13  11517.68
17:06:01      17.26      0.08     17.18     11.46    337.86
17:07:01      66.89      2.07     64.82    107.45   2505.35
Average:     171.22     11.05    160.17    743.29  44881.85

17:00:02 kbmemfree  kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
17:01:01  30167220 31713752   2772000     8.42     67404  1791816  1418800    4.17   862868 1628512  136256
17:02:01  26264416 31640772   6674804    20.26    124156  5412760  1736596    5.11  1019484 5165592 1936288
17:03:01  24232900 29839520   8706320    26.43    141140  5604964  8489644   24.98  3000412 5141352     396
17:04:01  23442704 29373620   9496516    28.83    171404  5858580  9248236   27.21  3556688 5321880    1188
17:05:01  23487088 29419060   9452132    28.70    171568  5859324  9225988   27.15  3513616 5321536     176
17:06:01  23804188 29730264   9135032    27.73    171872  5858888  7551660   22.22  3221808 5313484     288
17:07:01  25783212 31570732   7156008    21.72    174132  5729516  1632192    4.80  1417508 5189492    2548
Average:  25311675 30469674   7627545    23.16    145954  5159407  5614731   16.52  2370341 4725978  296734

17:00:02           IFACE  rxpck/s  txpck/s   rxkB/s   txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
17:01:01            ens3   240.43   171.40  1177.28    60.00    0.00    0.00     0.00    0.00
17:01:01              lo     1.36     1.36     0.15     0.15    0.00    0.00     0.00    0.00
17:01:01         docker0     0.00     0.00     0.00     0.00    0.00    0.00     0.00    0.00
17:02:01            ens3  1079.29   490.62 30771.56    52.49    0.00    0.00     0.00    0.00
17:02:01              lo    13.73    13.73     1.33     1.33    0.00    0.00     0.00    0.00
17:02:01         docker0     0.00     0.00     0.00     0.00    0.00    0.00     0.00    0.00
17:03:01 br-06929aadd0cf     0.72     0.62     0.05     0.30    0.00    0.00     0.00    0.00
17:03:01     veth8449edb     0.75     0.95     0.05     0.05    0.00    0.00     0.00    0.00
17:03:01     veth62fc5d6     0.40     0.65     0.05     0.30    0.00    0.00     0.00    0.00
17:03:01     veth5e5514a    24.13    22.33    10.52    16.08    0.00    0.00     0.00    0.00
17:04:01 br-06929aadd0cf     0.27     0.27     0.02     0.02    0.00    0.00     0.00    0.00
17:04:01     veth8449edb     3.78     4.87     0.75     0.49    0.00    0.00     0.00    0.00
17:04:01     veth62fc5d6     0.15     0.22     0.01     0.01    0.00    0.00     0.00    0.00
17:04:01     veth5e5514a    21.65    17.48     6.62    23.76    0.00    0.00     0.00    0.00
17:05:01 br-06929aadd0cf     0.15     0.05     0.01     0.00    0.00    0.00     0.00    0.00
17:05:01     veth8449edb     3.20     4.67     0.66     0.36    0.00    0.00     0.00    0.00
17:05:01     veth62fc5d6     0.15     0.07     0.01     0.00    0.00    0.00     0.00    0.00
17:05:01     veth5e5514a     0.50     0.50     0.63     0.08    0.00    0.00     0.00    0.00
17:06:01 br-06929aadd0cf     0.02     0.00     0.00     0.00    0.00    0.00     0.00    0.00
17:06:01     veth5e5514a     0.33     0.33     0.58     0.03    0.00    0.00     0.00    0.00
17:06:01     vetha5ca42c    65.22    65.91    18.66    41.15    0.00    0.00     0.00    0.00
17:06:01            ens3  1512.45   820.51 33534.79   152.84    0.00    0.00     0.00    0.00
17:07:01            ens3    47.39    39.03    71.32    18.48    0.00    0.00     0.00    0.00
17:07:01              lo    27.13    27.13     2.50     2.50    0.00    0.00     0.00    0.00
17:07:01         docker0    12.86    19.11     2.09   287.83    0.00    0.00     0.00    0.00
Average:            ens3   205.64   108.01  4752.90    21.70    0.00    0.00     0.00    0.00
Average:              lo     3.33     3.33     0.31     0.31    0.00    0.00     0.00    0.00
Average:         docker0     1.84     2.74     0.30    41.21    0.00    0.00     0.00    0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-3642)  05/23/24  _x86_64_  (8 CPU)

16:59:27  LINUX RESTART  (8 CPU)

17:00:02  CPU  %user  %nice  %system  %iowait  %steal  %idle
17:01:01  all  10.66   0.00     1.08     3.58    0.04  84.64
17:01:01    0  22.21   0.00     1.75     0.48    0.05  75.51
17:01:01    1  12.35   0.00     1.14     0.41    0.03  86.07
17:01:01    2   3.16   0.00     0.48     5.26    0.09  91.02
17:01:01    3   4.39   0.00     1.38     3.94    0.02  90.28
17:01:01    4   3.60   0.00     0.90     4.87    0.02  90.61
17:01:01    5   1.58   0.00     0.42    11.09    0.02  86.89
17:01:01    6   6.01   0.00     0.53     0.27    0.02  93.18
17:01:01    7  32.00   0.00     2.06     2.28    0.05  63.61
17:02:01  all  16.83   0.00    10.14     7.47    0.14  65.42
17:02:01    0  17.56   0.00     9.02     2.66    0.14  70.63
17:02:01    1  17.48   0.00    11.84     5.55    0.15  64.98
17:02:01    2  13.02   0.00     9.97     3.24    0.17  73.60
17:02:01    3  13.64   0.00     9.74     4.97    0.14  71.51
17:02:01    4  14.76   0.00     9.85    23.77    0.14  51.49
17:02:01    5  11.82   0.00    10.31    15.42    0.15  62.29
17:02:01    6  13.29   0.00     9.50     1.62    0.12  75.47
17:02:01    7  33.08   0.00    10.91     2.58    0.15  53.28
17:03:01  all  20.01   0.00     3.88     9.96    0.08  66.06
17:03:01    0  22.19   0.00     4.84     2.89    0.08  69.99
17:03:01    1  17.74   0.00     3.14     8.06    0.08  70.98
17:03:01    2  24.34   0.00     4.13     0.67    0.08  70.78
17:03:01    3  18.98   0.00     3.96     7.31    0.08  69.66
17:03:01    4  19.36   0.00     4.32    35.14    0.08  41.10
17:03:01    5  19.31   0.00     3.20    15.94    0.07  61.48
17:03:01    6  18.03   0.00     3.36     2.44    0.10  76.07
17:03:01    7  20.17   0.00     4.10     7.40    0.08  68.24
17:04:01  all  18.15   0.00     3.42     3.90    0.08  74.45
17:04:01    0  17.25   0.00     3.55     3.50    0.08  75.62
17:04:01    1  19.03   0.00     4.13     8.74    0.08  68.02
17:04:01    2  19.95   0.00     3.55     2.86    0.07  73.57
17:04:01    3  18.52   0.00     3.39     1.76    0.08  76.24
17:04:01    4  14.03   0.00     3.01     0.25    0.08  82.63
17:04:01    5  19.81   0.00     2.57     8.41    0.08  69.13
17:04:01    6  17.68   0.00     3.07     4.21    0.07  74.97
17:04:01    7  18.93   0.00     4.09     1.51    0.08  75.38
17:05:01  all   4.33   0.00     0.46     0.85    0.03  94.33
17:05:01    0   3.46   0.00     0.40     0.00    0.02  96.13
17:05:01    1   3.55   0.00     0.42     5.86    0.03  90.14
17:05:01    2   4.34   0.00     0.47     0.63    0.03  94.52
17:05:01    3   3.38   0.00     0.42     0.00    0.03  96.17
17:05:01    4   6.07   0.00     0.50     0.02    0.03  93.38
17:05:01    5   3.64   0.00     0.53     0.00    0.05  95.77
17:05:01    6   6.14   0.00     0.60     0.00    0.03  93.23
17:05:01    7   4.02   0.00     0.37     0.25    0.03  95.32
17:06:01  all   2.47   0.00     0.61     0.10    0.04  96.79
17:06:01    0   2.47   0.00     0.68     0.07    0.03  96.74
17:06:01    1   2.51   0.00     0.67     0.07    0.05  96.71
17:06:01    2   2.04   0.00     0.52     0.28    0.03  97.13
17:06:01    3   2.25   0.00     0.53     0.03    0.03  97.14
17:06:01    4   2.47   0.00     0.60     0.19    0.03  96.72
17:06:01    5   3.34   0.00     0.69     0.13    0.05  95.78
17:06:01    6   2.00   0.00     0.57     0.00    0.03  97.40
17:06:01    7   2.71   0.00     0.62     0.03    0.05  96.59
17:07:01  all   8.47   0.00     0.73     0.34    0.03  90.43
17:07:01    0   1.39   0.00     0.53     0.02    0.03  98.03
17:07:01    1   1.00   0.00     0.60     0.07    0.02  98.31
17:07:01    2   1.22   0.00     0.38     0.02    0.02  98.37
17:07:01    3  25.68   0.00     0.97     0.22    0.05  73.08
17:07:01    4  16.33   0.00     1.05     0.15    0.03  82.43
17:07:01    5   5.50   0.00     0.95     1.99    0.03  91.53
17:07:01    6  15.48   0.00     0.80     0.17    0.02  83.54
17:07:01    7   1.12   0.00     0.57     0.13    0.03  98.14
Average:  all  11.54   0.00     2.89     3.73    0.06  81.77
Average:    0  12.31   0.00     2.96     1.37    0.06  83.30
Average:    1  10.49   0.00     3.12     4.11    0.06  82.22
Average:    2   9.71   0.00     2.78     1.84    0.07  85.60
Average:    3  12.42   0.00     2.90     2.59    0.06  82.02
Average:    4  10.96   0.00     2.88     9.16    0.06  76.94
Average:    5   9.28   0.00     2.66     7.54    0.06  80.45
Average:    6  11.23   0.00     2.63     1.24    0.06  84.85
Average:    7  15.92   0.00     3.23     2.02    0.07  78.76
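Editor's note: the "---> <command>:" sections in this log read as the output of a host-diagnostics step that runs a fixed list of commands (uname -a, lscpu, nproc, df -h, free -m, ip addr, sar) and prints each under a marker line. A minimal sketch of such a step follows; the actual script used by this Jenkins job is not shown in the log, so the command list and loop structure here are assumptions, not the real implementation.

```shell
#!/bin/bash
# Sketch: print host diagnostics, each prefixed with a "---> <cmd>:" marker
# matching the format seen in the log above. Hypothetical reconstruction.
for cmd in "uname -a" "nproc" "df -h" "free -m"; do
    echo "---> ${cmd}:"
    ${cmd} 2>&1 || true   # keep collecting even if one command fails
    echo
done
```

Redirecting such a step's output into the archived console log is what produces the stats blocks above.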