Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-28545 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-F8OaJwSaGhl9/agent.2052
SSH_AGENT_PID=2054
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_9045856659038745608.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_9045856659038745608.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
Commit message: "Fix timeout in pap CSIT for auditing undeploys"
 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins4702058324121833288.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-4v0z
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-4v0z/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.2 from /tmp/venv-4v0z/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.4.0
aspy.yaml==1.3.0
attrs==24.1.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.153
botocore==1.34.153
bs4==0.0.2
cachetools==5.4.0
certifi==2024.7.4
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.2.0
filelock==3.15.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.32.0
httplib2==0.22.0
identify==2.6.0
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.7.0
kubernetes==30.1.0
lftools==0.37.10
lxml==5.2.2
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.3.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.1.0
oslo.config==9.5.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==6.1.1
oslo.serialization==5.4.0
oslo.utils==7.2.0
packaging==24.1
pbr==6.0.0
platformdirs==4.2.2
prettytable==3.10.2
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
PyJWT==2.9.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.6.0
python-novaclient==18.6.0
python-openstackclient==6.6.1
python-swiftclient==4.6.0
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.19.1
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.0
tqdm==4.66.4
typing_extensions==4.12.2
tzdata==2024.1
urllib3==1.26.19
virtualenv==20.26.3
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
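The `lf-activate-venv()` step above amounts to creating a throwaway python3 venv, putting its `bin/` on `PATH`, and freezing the installed packages into a requirements file ("Generating Requirements File"). A minimal sketch of the same flow — the paths are illustrative stand-ins for the job's `/tmp/venv-4v0z`, and the real job additionally installs `lftools`, which is skipped here:

```shell
# Create an isolated python3 venv (stand-in for lf-activate-venv's /tmp/venv-XXXX)
python3 -m venv /tmp/venv-demo
# Activate it, which is what "Adding /tmp/venv-4v0z/bin to PATH" achieves
. /tmp/venv-demo/bin/activate
# "Generating Requirements File": pin everything currently installed
pip freeze > /tmp/requirements.txt
```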
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins10248788141727966911.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins15829719732379227638.sh
+ /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter condensed: 60.0M downloaded, ~100MB/s average]
Setting project configuration for: pap
Configuring docker compose...
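The two docker warnings above have a standard remedy, and the "Installing now..." step is the usual manual install of the Compose v2 CLI plugin. A hedged sketch of both — `$REGISTRY`, `$USER`, `$PASSWORD` and the download URL are illustrative, since the log does not show the registry address or the install script itself (network-dependent commands are left commented):

```shell
# Safer login: feed the password on stdin instead of the command line,
# which is exactly what the WARNING above recommends.
# printf '%s' "$PASSWORD" | docker login --username "$USER" --password-stdin "$REGISTRY"

# Compose v2 is a docker CLI plugin; docker discovers it under ~/.docker/cli-plugins
mkdir -p "$HOME/.docker/cli-plugins"
# curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
#   -o "$HOME/.docker/cli-plugins/docker-compose"
# chmod +x "$HOME/.docker/cli-plugins/docker-compose"
```

After this, `docker compose` resolves as a subcommand instead of failing with "'compose' is not a docker command".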
Starting apex-pdp application with Grafana
simulator Pulling
grafana Pulling
policy-db-migrator Pulling
apex-pdp Pulling
mariadb Pulling
kafka Pulling
prometheus Pulling
api Pulling
zookeeper Pulling
pap Pulling
[per-layer "Pulling fs layer" / "Waiting" / "Downloading [==> ]" / "Extracting [==> ]" progress frames condensed; final statuses reached before the transcript cuts off:]
31e352740f53 Pull complete (shared by 5 images)
ecc4de98d537 Pull complete (shared by 4 images)
665dfb3388a1 Pull complete
bda0b253c68f Pull complete
145e9fcd3938 Pull complete
9fa9226be034 Pull complete
b9357b55a7a5 Pull complete
4be774fd73e2 Pull complete
71f834c33815 Pull complete
1617e25568b2 Pull complete
4c3047628e17 Pull complete
Download complete: 9038eaba24f8, 04a7796b82ca, bc8105c6553b, 929241f867bb, 37728a7352e6, 3f40c7aa46a6, 353af139d39e, 6cf350721225, de723b4c7ed9, c8e6f0452a8e, 0143f8517101, ee69cc1a77e2, 81667b400b57, ec3b6d0cc414, a8d3998ab21c, 89d6e2ec6372, 80096f8bb25e, cbd359ebc87d, 114f99593bd8, 44779101e748, a721db3e3f3d, 1850a929b84a, 397a918c7da3, 634de6c90876, cd00854cfb1a, ec307c9fbf62, c522420720c6
Still extracting at the cut: ad1782e4d1ef, f270a5fd7930, 1fe734c5fee3, 10ac4908093d, a40760cd2625
Still downloading at the cut: 806be17e856d, d4e715947f0e
[transcript truncated mid-record:]
18d28937c421 Downloading [==================================================>]
2.678kB/2.678kB 18d28937c421 Verifying Checksum 18d28937c421 Download complete 806be17e856d Verifying Checksum 806be17e856d Download complete 6cf350721225 Extracting [> ] 557.1kB/98.32MB 873361efd54d Downloading [================================================> ] 3.011kB/3.087kB 873361efd54d Downloading [==================================================>] 3.087kB/3.087kB 873361efd54d Verifying Checksum 873361efd54d Download complete dd44465db85c Downloading [=====================================> ] 3.011kB/4.02kB dd44465db85c Downloading [==================================================>] 4.02kB/4.02kB dd44465db85c Verifying Checksum dd44465db85c Download complete 10ac4908093d Extracting [===================================> ] 21.63MB/30.43MB 0636908550c9 Downloading [==================================================>] 1.441kB/1.441kB 0636908550c9 Verifying Checksum 0636908550c9 Download complete cd795675b8a2 Downloading [=> ] 3.009kB/139.5kB cd795675b8a2 Download complete d4e715947f0e Downloading [===============================================> ] 47.23MB/50.11MB a40760cd2625 Extracting [==> ] 3.899MB/84.46MB 407f3c6e3260 Downloading [==================================================>] 100B/100B 407f3c6e3260 Download complete 67fb76c620a2 Downloading [==================================================>] 721B/721B 67fb76c620a2 Verifying Checksum 67fb76c620a2 Download complete f270a5fd7930 Extracting [=========> ] 31.75MB/159.1MB d4e715947f0e Verifying Checksum d4e715947f0e Download complete ad1782e4d1ef Extracting [==============================> ] 110.3MB/180.4MB ec307c9fbf62 Extracting [> ] 557.1kB/55.21MB 4abcf2066143 Downloading [> ] 48.06kB/3.409MB 5c277da153ce Downloading [==================================================>] 141B/141B 5c277da153ce Verifying Checksum 5c277da153ce Download complete 1fe734c5fee3 Extracting [===================> ] 12.62MB/32.94MB 85ed0bf0f127 Downloading [> ] 48.06kB/3.184MB a59a4ddf8225 Downloading [> ] 48.06kB/4.333MB 
6cf350721225 Extracting [===> ] 6.128MB/98.32MB 4abcf2066143 Verifying Checksum 4abcf2066143 Download complete 4abcf2066143 Extracting [> ] 65.54kB/3.409MB 2d9ac7a96b08 Downloading [===> ] 3.01kB/47.96kB 2d9ac7a96b08 Downloading [==================================================>] 47.96kB/47.96kB 2d9ac7a96b08 Download complete 10ac4908093d Extracting [======================================> ] 23.27MB/30.43MB 85ed0bf0f127 Verifying Checksum 85ed0bf0f127 Download complete c9a66980b76c Downloading [======> ] 3.01kB/23.82kB c9a66980b76c Downloading [==================================================>] 23.82kB/23.82kB c9a66980b76c Verifying Checksum c9a66980b76c Download complete f270a5fd7930 Extracting [===========> ] 37.88MB/159.1MB a59a4ddf8225 Verifying Checksum a59a4ddf8225 Download complete ad1782e4d1ef Extracting [===============================> ] 113.6MB/180.4MB 562cf3de6818 Downloading [> ] 539.6kB/61.52MB a40760cd2625 Extracting [=====> ] 9.47MB/84.46MB ec307c9fbf62 Extracting [===> ] 3.899MB/55.21MB bfcc9123594e Downloading [> ] 506.8kB/50.57MB 1fe734c5fee3 Extracting [=====================> ] 14.42MB/32.94MB f73d5405641d Downloading [============> ] 3.01kB/11.92kB f73d5405641d Downloading [==================================================>] 11.92kB/11.92kB f73d5405641d Verifying Checksum f73d5405641d Download complete 0c9bbf800250 Downloading [==================================================>] 1.225kB/1.225kB 0c9bbf800250 Verifying Checksum 0c9bbf800250 Download complete 6cf350721225 Extracting [=====> ] 10.03MB/98.32MB 4abcf2066143 Extracting [=====> ] 393.2kB/3.409MB 10ac4908093d Extracting [==========================================> ] 25.89MB/30.43MB f270a5fd7930 Extracting [=============> ] 43.45MB/159.1MB ad1782e4d1ef Extracting [================================> ] 115.9MB/180.4MB a40760cd2625 Extracting [========> ] 14.48MB/84.46MB 562cf3de6818 Downloading [====> ] 5.946MB/61.52MB bfcc9123594e Downloading [=======> ] 7.617MB/50.57MB 1fe734c5fee3 
Extracting [=========================> ] 16.58MB/32.94MB ec307c9fbf62 Extracting [=====> ] 6.128MB/55.21MB 4798a7e93601 Downloading [> ] 376.8kB/37.11MB 4798a7e93601 Downloading [> ] 376.8kB/37.11MB 6cf350721225 Extracting [========> ] 16.15MB/98.32MB 4abcf2066143 Extracting [=======================================> ] 2.687MB/3.409MB ad1782e4d1ef Extracting [================================> ] 118.1MB/180.4MB bfcc9123594e Downloading [================> ] 16.76MB/50.57MB a40760cd2625 Extracting [===========> ] 18.94MB/84.46MB 562cf3de6818 Downloading [===========> ] 14.6MB/61.52MB 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB f270a5fd7930 Extracting [===============> ] 47.91MB/159.1MB 1fe734c5fee3 Extracting [============================> ] 18.74MB/32.94MB 10ac4908093d Extracting [===========================================> ] 26.54MB/30.43MB 4798a7e93601 Downloading [===============> ] 11.72MB/37.11MB 4798a7e93601 Downloading [===============> ] 11.72MB/37.11MB ec307c9fbf62 Extracting [=======> ] 7.799MB/55.21MB 6cf350721225 Extracting [=========> ] 19.5MB/98.32MB 562cf3de6818 Downloading [===================> ] 23.79MB/61.52MB bfcc9123594e Downloading [===========================> ] 27.43MB/50.57MB ad1782e4d1ef Extracting [=================================> ] 120.3MB/180.4MB 4798a7e93601 Downloading [=====================================> ] 28MB/37.11MB 4798a7e93601 Downloading [=====================================> ] 28MB/37.11MB a40760cd2625 Extracting [=============> ] 22.28MB/84.46MB f270a5fd7930 Extracting [================> ] 53.48MB/159.1MB ec307c9fbf62 Extracting [=========> ] 10.03MB/55.21MB 10ac4908093d Extracting [=============================================> ] 27.53MB/30.43MB 6cf350721225 Extracting [===========> ] 21.73MB/98.32MB 1fe734c5fee3 Extracting [==============================> ] 20.19MB/32.94MB bfcc9123594e Downloading [=====================================> ] 38.09MB/50.57MB ad1782e4d1ef 
Extracting [=================================> ] 122.6MB/180.4MB 562cf3de6818 Downloading [=========================> ] 31.36MB/61.52MB 4798a7e93601 Verifying Checksum 4798a7e93601 Verifying Checksum 4798a7e93601 Download complete 4798a7e93601 Download complete 4abcf2066143 Pull complete 5c277da153ce Extracting [==================================================>] 141B/141B ec307c9fbf62 Extracting [===========> ] 12.26MB/55.21MB f270a5fd7930 Extracting [=================> ] 56.82MB/159.1MB a40760cd2625 Extracting [================> ] 27.3MB/84.46MB 6cf350721225 Extracting [==============> ] 27.85MB/98.32MB 5c277da153ce Extracting [==================================================>] 141B/141B 10ac4908093d Extracting [===============================================> ] 29.16MB/30.43MB ad1782e4d1ef Extracting [==================================> ] 124.2MB/180.4MB 1fe734c5fee3 Extracting [================================> ] 21.27MB/32.94MB bfcc9123594e Downloading [=============================================> ] 45.71MB/50.57MB 562cf3de6818 Downloading [===============================> ] 38.93MB/61.52MB a453f30e82bf Downloading [> ] 539.9kB/257.5MB a453f30e82bf Downloading [> ] 539.9kB/257.5MB 4798a7e93601 Extracting [> ] 393.2kB/37.11MB 4798a7e93601 Extracting [> ] 393.2kB/37.11MB a40760cd2625 Extracting [==================> ] 31.2MB/84.46MB bfcc9123594e Verifying Checksum bfcc9123594e Download complete f270a5fd7930 Extracting [===================> ] 60.72MB/159.1MB 6cf350721225 Extracting [===============> ] 31.2MB/98.32MB 10ac4908093d Extracting [=================================================> ] 30.15MB/30.43MB 1fe734c5fee3 Extracting [=================================> ] 22.35MB/32.94MB 562cf3de6818 Downloading [======================================> ] 47.04MB/61.52MB ad1782e4d1ef Extracting [==================================> ] 125.9MB/180.4MB ec307c9fbf62 Extracting [=============> ] 14.48MB/55.21MB a453f30e82bf Downloading [=> ] 9.169MB/257.5MB 
a453f30e82bf Downloading [=> ] 9.169MB/257.5MB a40760cd2625 Extracting [=====================> ] 35.65MB/84.46MB 016e383f3f47 Downloading [================================> ] 721B/1.102kB 016e383f3f47 Downloading [================================> ] 721B/1.102kB 4798a7e93601 Extracting [===> ] 2.753MB/37.11MB 4798a7e93601 Extracting [===> ] 2.753MB/37.11MB 016e383f3f47 Download complete 016e383f3f47 Download complete 6cf350721225 Extracting [=================> ] 35.09MB/98.32MB 562cf3de6818 Downloading [=========================================> ] 51.36MB/61.52MB f270a5fd7930 Extracting [====================> ] 65.73MB/159.1MB a453f30e82bf Downloading [==> ] 11.31MB/257.5MB a453f30e82bf Downloading [==> ] 11.31MB/257.5MB 5c277da153ce Pull complete ec307c9fbf62 Extracting [==============> ] 15.6MB/55.21MB 85ed0bf0f127 Extracting [> ] 32.77kB/3.184MB 1fe734c5fee3 Extracting [===================================> ] 23.07MB/32.94MB 10ac4908093d Extracting [==================================================>] 30.43MB/30.43MB ad1782e4d1ef Extracting [===================================> ] 127MB/180.4MB a40760cd2625 Extracting [========================> ] 41.22MB/84.46MB 4798a7e93601 Extracting [=====> ] 3.932MB/37.11MB 4798a7e93601 Extracting [=====> ] 3.932MB/37.11MB 562cf3de6818 Downloading [================================================> ] 60.01MB/61.52MB 6cf350721225 Extracting [===================> ] 38.99MB/98.32MB 562cf3de6818 Verifying Checksum 562cf3de6818 Download complete f7d27dafad0a Downloading [> ] 85.94kB/8.351MB f7d27dafad0a Downloading [> ] 85.94kB/8.351MB ec307c9fbf62 Extracting [===============> ] 17.27MB/55.21MB a453f30e82bf Downloading [====> ] 21.55MB/257.5MB a453f30e82bf Downloading [====> ] 21.55MB/257.5MB a40760cd2625 Extracting [===========================> ] 46.24MB/84.46MB 1fe734c5fee3 Extracting [===================================> ] 23.43MB/32.94MB f270a5fd7930 Extracting [======================> ] 70.75MB/159.1MB 85ed0bf0f127 Extracting 
[=====> ] 327.7kB/3.184MB ad1782e4d1ef Extracting [===================================> ] 128.7MB/180.4MB 6cf350721225 Extracting [======================> ] 45.12MB/98.32MB 4798a7e93601 Extracting [=========> ] 6.685MB/37.11MB 4798a7e93601 Extracting [=========> ] 6.685MB/37.11MB f7d27dafad0a Downloading [======================================> ] 6.446MB/8.351MB f7d27dafad0a Downloading [======================================> ] 6.446MB/8.351MB 56ccc8be1ca0 Downloading [=> ] 687B/21.29kB 56ccc8be1ca0 Downloading [=> ] 687B/21.29kB a453f30e82bf Downloading [======> ] 31.27MB/257.5MB a453f30e82bf Downloading [======> ] 31.27MB/257.5MB f7d27dafad0a Verifying Checksum 56ccc8be1ca0 Verifying Checksum 56ccc8be1ca0 Download complete f7d27dafad0a Verifying Checksum f7d27dafad0a Download complete 56ccc8be1ca0 Verifying Checksum 56ccc8be1ca0 Download complete f7d27dafad0a Download complete f270a5fd7930 Extracting [=======================> ] 74.09MB/159.1MB 85ed0bf0f127 Extracting [==============> ] 950.3kB/3.184MB a40760cd2625 Extracting [=============================> ] 50.14MB/84.46MB ad1782e4d1ef Extracting [====================================> ] 130.4MB/180.4MB 10ac4908093d Pull complete 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 1fe734c5fee3 Extracting [====================================> ] 23.79MB/32.94MB 6cf350721225 Extracting [=========================> ] 50.14MB/98.32MB ec307c9fbf62 Extracting [==================> ] 20.61MB/55.21MB 4798a7e93601 Extracting [============> ] 9.044MB/37.11MB 4798a7e93601 Extracting [============> ] 9.044MB/37.11MB 1c6e35a73ed7 Downloading [================================> ] 721B/1.105kB 1c6e35a73ed7 Downloading [================================> ] 721B/1.105kB 1c6e35a73ed7 Downloading [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Downloading 
[==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Verifying Checksum 1c6e35a73ed7 Download complete 1c6e35a73ed7 Verifying Checksum 1c6e35a73ed7 Download complete 85ed0bf0f127 Extracting [============================================> ] 2.851MB/3.184MB a453f30e82bf Downloading [=======> ] 40.96MB/257.5MB a453f30e82bf Downloading [=======> ] 40.96MB/257.5MB f270a5fd7930 Extracting [========================> ] 78.54MB/159.1MB f77f01ac624c Downloading [> ] 445.7kB/43.2MB f77f01ac624c Downloading [> ] 445.7kB/43.2MB a40760cd2625 Extracting [================================> ] 55.71MB/84.46MB ad1782e4d1ef Extracting [====================================> ] 132.6MB/180.4MB 6cf350721225 Extracting [===========================> ] 54.03MB/98.32MB 1fe734c5fee3 Extracting [=====================================> ] 24.87MB/32.94MB a453f30e82bf Downloading [==========> ] 54.42MB/257.5MB a453f30e82bf Downloading [==========> ] 54.42MB/257.5MB 85ed0bf0f127 Extracting [==================================================>] 3.184MB/3.184MB aa5e151b62ff Downloading [=========================================> ] 710B/853B aa5e151b62ff Downloading [=========================================> ] 710B/853B aa5e151b62ff Downloading [==================================================>] 853B/853B aa5e151b62ff Downloading [==================================================>] 853B/853B aa5e151b62ff Verifying Checksum aa5e151b62ff Verifying Checksum aa5e151b62ff Download complete aa5e151b62ff Download complete 4798a7e93601 Extracting [==============> ] 10.62MB/37.11MB 4798a7e93601 Extracting [==============> ] 10.62MB/37.11MB f77f01ac624c Downloading [============> ] 11.03MB/43.2MB f77f01ac624c Downloading [============> ] 11.03MB/43.2MB f270a5fd7930 Extracting [==========================> ] 83MB/159.1MB ec307c9fbf62 Extracting [===================> ] 21.73MB/55.21MB a40760cd2625 Extracting [==================================> ] 58.49MB/84.46MB ad1782e4d1ef 
Extracting [=====================================> ] 135.4MB/180.4MB a453f30e82bf Downloading [============> ] 64.63MB/257.5MB a453f30e82bf Downloading [============> ] 64.63MB/257.5MB 262d375318c3 Downloading [==================================================>] 98B/98B 262d375318c3 Downloading [==================================================>] 98B/98B 262d375318c3 Verifying Checksum 262d375318c3 Verifying Checksum 262d375318c3 Download complete 262d375318c3 Download complete 6cf350721225 Extracting [=============================> ] 57.38MB/98.32MB 1fe734c5fee3 Extracting [=======================================> ] 26.31MB/32.94MB f270a5fd7930 Extracting [===========================> ] 87.46MB/159.1MB f77f01ac624c Downloading [=============================> ] 25.16MB/43.2MB f77f01ac624c Downloading [=============================> ] 25.16MB/43.2MB 4798a7e93601 Extracting [=================> ] 12.98MB/37.11MB 4798a7e93601 Extracting [=================> ] 12.98MB/37.11MB a40760cd2625 Extracting [======================================> ] 64.62MB/84.46MB ec307c9fbf62 Extracting [======================> ] 24.51MB/55.21MB ad1782e4d1ef Extracting [=====================================> ] 137MB/180.4MB 28a7d18ebda4 Downloading [==================================================>] 173B/173B 28a7d18ebda4 Downloading [==================================================>] 173B/173B 28a7d18ebda4 Verifying Checksum 28a7d18ebda4 Download complete 28a7d18ebda4 Verifying Checksum 28a7d18ebda4 Download complete 85ed0bf0f127 Pull complete 44779101e748 Pull complete 6cf350721225 Extracting [==============================> ] 60.16MB/98.32MB a59a4ddf8225 Extracting [> ] 65.54kB/4.333MB a453f30e82bf Downloading [==============> ] 72.7MB/257.5MB a453f30e82bf Downloading [==============> ] 72.7MB/257.5MB a721db3e3f3d Extracting [> ] 65.54kB/5.526MB f270a5fd7930 Extracting [============================> ] 91.36MB/159.1MB f77f01ac624c Downloading 
[==========================================> ] 37.1MB/43.2MB f77f01ac624c Downloading [==========================================> ] 37.1MB/43.2MB 4798a7e93601 Extracting [====================> ] 14.94MB/37.11MB 4798a7e93601 Extracting [====================> ] 14.94MB/37.11MB a40760cd2625 Extracting [========================================> ] 68.52MB/84.46MB f77f01ac624c Verifying Checksum f77f01ac624c Download complete f77f01ac624c Download complete 1fe734c5fee3 Extracting [=========================================> ] 27.39MB/32.94MB ad1782e4d1ef Extracting [======================================> ] 138.7MB/180.4MB ec307c9fbf62 Extracting [========================> ] 27.3MB/55.21MB bdc615dfc787 Downloading [> ] 2.738kB/230.6kB bdc615dfc787 Downloading [> ] 2.738kB/230.6kB bdc615dfc787 Verifying Checksum bdc615dfc787 Download complete bdc615dfc787 Verifying Checksum bdc615dfc787 Download complete 6cf350721225 Extracting [================================> ] 62.95MB/98.32MB a453f30e82bf Downloading [================> ] 85.65MB/257.5MB a453f30e82bf Downloading [================> ] 85.65MB/257.5MB f270a5fd7930 Extracting [==============================> ] 95.81MB/159.1MB 4798a7e93601 Extracting [=======================> ] 17.3MB/37.11MB 4798a7e93601 Extracting [=======================> ] 17.3MB/37.11MB a40760cd2625 Extracting [===========================================> ] 74.09MB/84.46MB ab973a5038b6 Downloading [> ] 539.9kB/121.6MB 1fe734c5fee3 Extracting [============================================> ] 29.56MB/32.94MB ec307c9fbf62 Extracting [==========================> ] 28.97MB/55.21MB a721db3e3f3d Extracting [==> ] 262.1kB/5.526MB 5aee3e0528f7 Downloading [==========> ] 721B/3.445kB 5aee3e0528f7 Downloading [==================================================>] 3.445kB/3.445kB 6cf350721225 Extracting [==================================> ] 67.4MB/98.32MB 5aee3e0528f7 Verifying Checksum 5aee3e0528f7 Download complete a59a4ddf8225 Extracting [===> ] 262.1kB/4.333MB 
a453f30e82bf Downloading [==================> ] 96.37MB/257.5MB a453f30e82bf Downloading [==================> ] 96.37MB/257.5MB f270a5fd7930 Extracting [===============================> ] 101.4MB/159.1MB ad1782e4d1ef Extracting [=======================================> ] 142MB/180.4MB 4798a7e93601 Extracting [===========================> ] 20.05MB/37.11MB 4798a7e93601 Extracting [===========================> ] 20.05MB/37.11MB a40760cd2625 Extracting [===============================================> ] 80.77MB/84.46MB ab973a5038b6 Downloading [====> ] 10.76MB/121.6MB 1fe734c5fee3 Extracting [=============================================> ] 30.28MB/32.94MB a721db3e3f3d Extracting [===================> ] 2.163MB/5.526MB 6cf350721225 Extracting [===================================> ] 69.63MB/98.32MB ec307c9fbf62 Extracting [===============================> ] 34.54MB/55.21MB a59a4ddf8225 Extracting [==========================> ] 2.294MB/4.333MB a453f30e82bf Downloading [=====================> ] 108.7MB/257.5MB a453f30e82bf Downloading [=====================> ] 108.7MB/257.5MB a40760cd2625 Extracting [==================================================>] 84.46MB/84.46MB f270a5fd7930 Extracting [=================================> ] 107MB/159.1MB 33966fd36306 Downloading [> ] 527.6kB/121.6MB ad1782e4d1ef Extracting [=======================================> ] 144.3MB/180.4MB 4798a7e93601 Extracting [=============================> ] 21.63MB/37.11MB 4798a7e93601 Extracting [=============================> ] 21.63MB/37.11MB ab973a5038b6 Downloading [=======> ] 18.3MB/121.6MB a59a4ddf8225 Extracting [==================================================>] 4.333MB/4.333MB a453f30e82bf Downloading [=======================> ] 121.1MB/257.5MB a453f30e82bf Downloading [=======================> ] 121.1MB/257.5MB a721db3e3f3d Extracting [=====================================> ] 4.129MB/5.526MB 33966fd36306 Downloading [=======> ] 17.21MB/121.6MB ec307c9fbf62 Extracting 
[==================================> ] 38.44MB/55.21MB ad1782e4d1ef Extracting [========================================> ] 144.8MB/180.4MB f270a5fd7930 Extracting [==================================> ] 109.7MB/159.1MB 6cf350721225 Extracting [====================================> ] 72.42MB/98.32MB 1fe734c5fee3 Extracting [===============================================> ] 31MB/32.94MB a40760cd2625 Pull complete ab973a5038b6 Downloading [============> ] 30.15MB/121.6MB 4798a7e93601 Extracting [==============================> ] 22.81MB/37.11MB 4798a7e93601 Extracting [==============================> ] 22.81MB/37.11MB 114f99593bd8 Extracting [==================================================>] 1.119kB/1.119kB 114f99593bd8 Extracting [==================================================>] 1.119kB/1.119kB a453f30e82bf Downloading [==========================> ] 136.2MB/257.5MB a453f30e82bf Downloading [==========================> ] 136.2MB/257.5MB a59a4ddf8225 Pull complete 33966fd36306 Downloading [===========> ] 28.49MB/121.6MB 2d9ac7a96b08 Extracting [==================================> ] 32.77kB/47.96kB 2d9ac7a96b08 Extracting [==================================================>] 47.96kB/47.96kB a721db3e3f3d Extracting [========================================> ] 4.522MB/5.526MB 1fe734c5fee3 Extracting [==================================================>] 32.94MB/32.94MB f270a5fd7930 Extracting [===================================> ] 114.2MB/159.1MB 6cf350721225 Extracting [=======================================> ] 77.43MB/98.32MB ad1782e4d1ef Extracting [========================================> ] 145.9MB/180.4MB ec307c9fbf62 Extracting [======================================> ] 42.34MB/55.21MB 4798a7e93601 Extracting [================================> ] 24.38MB/37.11MB 4798a7e93601 Extracting [================================> ] 24.38MB/37.11MB ab973a5038b6 Downloading [=================> ] 42.02MB/121.6MB a453f30e82bf Downloading [===========================> ] 
143.7MB/257.5MB a453f30e82bf Downloading [===========================> ] 143.7MB/257.5MB 33966fd36306 Downloading [===============> ] 36.54MB/121.6MB ad1782e4d1ef Extracting [=========================================> ] 148.2MB/180.4MB ec307c9fbf62 Extracting [==========================================> ] 46.79MB/55.21MB a721db3e3f3d Extracting [==========================================> ] 4.719MB/5.526MB ab973a5038b6 Downloading [=====================> ] 52.8MB/121.6MB 6cf350721225 Extracting [=========================================> ] 82.44MB/98.32MB 1fe734c5fee3 Pull complete 4798a7e93601 Extracting [===================================> ] 26.35MB/37.11MB 4798a7e93601 Extracting [===================================> ] 26.35MB/37.11MB f270a5fd7930 Extracting [=====================================> ] 118.1MB/159.1MB c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB a453f30e82bf Downloading [==============================> ] 158.3MB/257.5MB a453f30e82bf Downloading [==============================> ] 158.3MB/257.5MB 33966fd36306 Downloading [===================> ] 47.83MB/121.6MB 2d9ac7a96b08 Pull complete 114f99593bd8 Pull complete c9a66980b76c Extracting [==================================================>] 23.82kB/23.82kB c9a66980b76c Extracting [==================================================>] 23.82kB/23.82kB ab973a5038b6 Downloading [=========================> ] 61.97MB/121.6MB ec307c9fbf62 Extracting [===============================================> ] 52.36MB/55.21MB ad1782e4d1ef Extracting [==========================================> ] 152.1MB/180.4MB 6cf350721225 Extracting [============================================> ] 88.01MB/98.32MB api Pulled 4798a7e93601 Extracting [======================================> ] 28.7MB/37.11MB 4798a7e93601 Extracting [======================================> ] 28.7MB/37.11MB 
f270a5fd7930 Extracting [======================================> ] 121.4MB/159.1MB a453f30e82bf Downloading [===============================> ] 163.1MB/257.5MB a453f30e82bf Downloading [===============================> ] 163.1MB/257.5MB a721db3e3f3d Extracting [===========================================> ] 4.85MB/5.526MB 33966fd36306 Downloading [========================> ] 58.6MB/121.6MB ab973a5038b6 Downloading [============================> ] 69.51MB/121.6MB 6cf350721225 Extracting [===============================================> ] 93.59MB/98.32MB ad1782e4d1ef Extracting [==========================================> ] 154.9MB/180.4MB 4798a7e93601 Extracting [========================================> ] 30.28MB/37.11MB 4798a7e93601 Extracting [========================================> ] 30.28MB/37.11MB f270a5fd7930 Extracting [=======================================> ] 125.3MB/159.1MB a721db3e3f3d Extracting [==================================================>] 5.526MB/5.526MB a453f30e82bf Downloading [=================================> ] 172.8MB/257.5MB a453f30e82bf Downloading [=================================> ] 172.8MB/257.5MB 33966fd36306 Downloading [===========================> ] 67.74MB/121.6MB c9a66980b76c Pull complete c8e6f0452a8e Pull complete 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB ab973a5038b6 Downloading [==================================> ] 85.09MB/121.6MB a721db3e3f3d Pull complete ec307c9fbf62 Extracting [=================================================> ] 55.15MB/55.21MB 6cf350721225 Extracting [=================================================> ] 96.93MB/98.32MB 4798a7e93601 Extracting [===========================================> ] 32.64MB/37.11MB 4798a7e93601 Extracting [===========================================> ] 32.64MB/37.11MB 1850a929b84a Extracting 
(docker compose image pull: per-layer download/extraction progress trimmed; all layers verified and extracted successfully)
pap Pulled
simulator Pulled
policy-db-migrator Pulled
prometheus Pulled
mariadb Pulled
apex-pdp Pulled
grafana Pulled
kafka Pulled
zookeeper Pulled
Network compose_default Creating
Network compose_default Created
Container zookeeper Creating
Container prometheus Creating
Container simulator Creating
Container mariadb Creating
Container mariadb Created
Container simulator Created
Container policy-db-migrator Creating
Container prometheus Created
Container grafana Creating
Container zookeeper Created
Container kafka Creating
Container grafana Created
Container policy-db-migrator Created
Container policy-api Creating
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-apex-pdp Creating
Container policy-apex-pdp Created
Container mariadb Starting
Container prometheus Starting
Container simulator Starting
Container zookeeper Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container simulator Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container mariadb Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting for REST to come up on localhost port 30003...
NAMES                STATUS
policy-apex-pdp      Up 10 seconds
policy-pap           Up 10 seconds
policy-api           Up 11 seconds
grafana              Up 14 seconds
kafka                Up 16 seconds
policy-db-migrator   Up 12 seconds
zookeeper            Up 17 seconds
simulator            Up 16 seconds
mariadb              Up 13 seconds
prometheus           Up 15 seconds
(the same container status listing is re-polled roughly every 5 seconds while waiting; later snapshots, identical apart from the uptimes, trimmed)
prometheus Up 35 seconds NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds policy-api Up 37 seconds grafana Up 39 seconds kafka Up 42 seconds zookeeper Up 42 seconds simulator Up 41 seconds mariadb Up 38 seconds prometheus Up 40 seconds Build docker image for robot framework Error: No such image: policy-csit-robot Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'... Build robot framework docker image Sending build context to Docker daemon 16.14MB Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye 3.10-slim-bullseye: Pulling from library/python 5de87e84afee: Pulling fs layer d15b3ae3d80d: Pulling fs layer 9f03957e89cd: Pulling fs layer b1526d5da331: Pulling fs layer 83e22f7aee86: Pulling fs layer 83e22f7aee86: Waiting b1526d5da331: Waiting d15b3ae3d80d: Download complete b1526d5da331: Verifying Checksum b1526d5da331: Download complete 9f03957e89cd: Verifying Checksum 9f03957e89cd: Download complete 83e22f7aee86: Verifying Checksum 83e22f7aee86: Download complete 5de87e84afee: Download complete 5de87e84afee: Pull complete d15b3ae3d80d: Pull complete 9f03957e89cd: Pull complete b1526d5da331: Pull complete 83e22f7aee86: Pull complete Digest: sha256:aefd4c44c655edfa34854a36bc3b50391e96e3cff9d967e836f4c7c5b9c3c122 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye ---> e561be79657d Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} ---> Running in 038258685a7e Removing intermediate container 038258685a7e ---> ee8efaef0315 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} ---> Running in 8c650df95ad5 Removing intermediate container 8c650df95ad5 ---> 56e0e199b84e Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST ---> Running in dbf2bca18826 Removing intermediate container dbf2bca18826 ---> 9a415029304f Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade 
--extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze ---> Running in 67c62519b796 bcrypt==4.2.0 certifi==2024.7.4 cffi==1.17.0rc1 charset-normalizer==3.3.2 confluent-kafka==2.5.0 cryptography==43.0.0 decorator==5.1.1 deepdiff==7.0.1 dnspython==2.6.1 future==1.0.0 idna==3.7 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5 more-itertools==5.0.0 ordered-set==4.1.0 paramiko==3.4.0 pbr==6.0.0 ply==3.11 protobuf==5.28.0rc1 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2rc1 requests==2.32.3 robotframework==7.0.1 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a11 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.2 Removing intermediate container 67c62519b796 ---> 9a3d63cdaf5b Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE} ---> Running in 33ede0d1a8e2 Removing intermediate container 33ede0d1a8e2 ---> 3a2237b9630b Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/ ---> b586f8b79ed2 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE} ---> Running in d176973acb3b Removing intermediate container d176973acb3b ---> 8da618871d61 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ] ---> Running in 4b866935e97b Removing intermediate container 4b866935e97b ---> a2928a44e090 Successfully built a2928a44e090 Successfully tagged policy-csit-robot:latest top - 17:02:57 up 3 min, 0 users, load average: 2.97, 1.75, 0.72 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 15.3 us, 3.7 sy, 0.0 ni, 75.5 id, 5.3 wa, 0.0 hi, 0.1 si, 0.1 st total used free shared buff/cache available Mem: 31G 2.8G 22G 1.3M 6.1G 28G Swap: 1.0G 0B 1.0G NAMES STATUS policy-apex-pdp Up About a minute policy-pap Up About a minute policy-api Up About a minute grafana Up About a minute kafka Up About a minute zookeeper Up About a minute simulator Up About a minute mariadb Up About a minute prometheus Up About a minute CONTAINER ID NAME CPU % 
MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
e916902110ed   policy-apex-pdp   1.46%   175.7MiB / 31.41GiB   0.55%   27kB / 40.2kB     0B / 0B           50
6ecd62cf2796   policy-pap        2.04%   560.4MiB / 31.41GiB   1.74%   110kB / 132kB     0B / 149MB        62
57f58bed2555   policy-api        0.16%   565.2MiB / 31.41GiB   1.76%   989kB / 673kB     0B / 0B           53
8c52dc87b7c9   grafana           0.04%   63.8MiB / 31.41GiB    0.20%   24.6kB / 4.72kB   0B / 26.2MB       16
fd49d548cdb7   kafka             5.80%   405MiB / 31.41GiB     1.26%   130kB / 129kB     0B / 545kB        87
884baed4b00f   zookeeper         0.61%   85.96MiB / 31.41GiB   0.27%   58.2kB / 51.4kB   4.1kB / 401kB     62
e1a1f0bb11de   simulator         0.20%   120.6MiB / 31.41GiB   0.37%   1.43kB / 0B       0B / 0B           77
c867c0381c03   mariadb           0.04%   102.3MiB / 31.41GiB   0.32%   970kB / 1.22MB    11.2MB / 71.8MB   31
7878f9ab5da8   prometheus        0.00%   20.47MiB / 31.41GiB   0.06%   67.3kB / 2.91kB   0B / 0B           12
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting
Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'...
| PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi...
| PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u...
| PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes kafka Up 2 minutes zookeeper Up 2 minutes simulator Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes
Shut down started!
Collecting logs from docker compose containers...
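The suite totals printed above (22 Pap-Test cases plus 8 Pap-Slas cases, 30 in all, zero failures) can be cross-checked mechanically when post-processing logs like this one. A minimal sketch in Python, assuming only the "N tests, N passed, N failed" summary-line format shown in the robot output; the helper name is illustrative, not part of the CSIT scripts:

```python
import re

# Per-suite summary lines as they appear in the robot output above.
summaries = [
    "22 tests, 22 passed, 0 failed",  # Pap-Test & Pap-Slas.Pap-Test
    "8 tests, 8 passed, 0 failed",    # Pap-Test & Pap-Slas.Pap-Slas
]

def parse_summary(line):
    """Extract (total, passed, failed) from a Robot Framework summary line."""
    m = re.search(r"(\d+) tests?, (\d+) passed, (\d+) failed", line)
    if not m:
        raise ValueError(f"unrecognised summary line: {line!r}")
    return tuple(int(g) for g in m.groups())

totals = [parse_summary(s) for s in summaries]
grand_total = sum(t for t, _, _ in totals)
grand_passed = sum(p for _, p, _ in totals)
grand_failed = sum(f for _, _, f in totals)

# Matches the top-level "Pap-Test & Pap-Slas" summary in the log.
print(f"{grand_total} tests, {grand_passed} passed, {grand_failed} failed")
# -> 30 tests, 30 passed, 0 failed
```

A check like this is useful in CI post-processing because it fails loudly if the per-suite counts ever stop adding up to the top-level summary.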
======== Logs from grafana ======== grafana | logger=settings t=2024-08-03T17:01:49.476083911Z level=info msg="Starting Grafana" version=11.1.3 commit=da5a557b6e1c3b33a5f2a4af73428ef67e949e4d branch=v11.1.x compiled=2024-08-03T17:01:49Z grafana | logger=settings t=2024-08-03T17:01:49.476497694Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-08-03T17:01:49.476513324Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-08-03T17:01:49.476517524Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-08-03T17:01:49.476520724Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-08-03T17:01:49.476523754Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-08-03T17:01:49.476526504Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-08-03T17:01:49.476529284Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-08-03T17:01:49.476532654Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-08-03T17:01:49.476535594Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-08-03T17:01:49.476542294Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-08-03T17:01:49.476550644Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings 
t=2024-08-03T17:01:49.476554514Z level=info msg=Target target=[all] grafana | logger=settings t=2024-08-03T17:01:49.476566754Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-08-03T17:01:49.476569754Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-08-03T17:01:49.476572494Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-08-03T17:01:49.476575544Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-08-03T17:01:49.476578774Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-08-03T17:01:49.476582164Z level=info msg="App mode production" grafana | logger=featuremgmt t=2024-08-03T17:01:49.476943556Z level=info msg=FeatureToggles awsAsyncQueryCaching=true cloudWatchNewLabelParsing=true annotationPermissionUpdate=true managedPluginsInstall=true logsInfiniteScrolling=true betterPageScrolling=true recoveryThreshold=true lokiMetricDataplane=true alertingSimplifiedRouting=true logRowsPopoverMenu=true publicDashboards=true prometheusMetricEncyclopedia=true recordedQueriesMulti=true lokiQueryHints=true exploreContentOutline=true exploreMetrics=true ssoSettingsApi=true alertingInsights=true lokiQuerySplitting=true lokiStructuredMetadata=true logsContextDatasourceUi=true correlations=true alertingNoDataErrorExecution=true influxdbBackendMigration=true angularDeprecationUI=true topnav=true dashgpt=true panelMonitoring=true dataplaneFrontendFallback=true kubernetesPlaylists=true transformationsRedesign=true cloudWatchCrossAccountQuerying=true awsDatasourcesNewFormStyling=true prometheusDataplane=true logsExploreTableVisualisation=true nestedFolders=true prometheusAzureOverrideAudience=true prometheusConfigOverhaulAuth=true grafana | logger=sqlstore t=2024-08-03T17:01:49.477002236Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-08-03T17:01:49.477016856Z 
level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-08-03T17:01:49.478574452Z level=info msg="Locking database" grafana | logger=migrator t=2024-08-03T17:01:49.478586492Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-08-03T17:01:49.479176224Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-08-03T17:01:49.480102248Z level=info msg="Migration successfully executed" id="create migration_log table" duration=925.754µs grafana | logger=migrator t=2024-08-03T17:01:49.485028778Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-08-03T17:01:49.485844411Z level=info msg="Migration successfully executed" id="create user table" duration=814.733µs grafana | logger=migrator t=2024-08-03T17:01:49.490900973Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-08-03T17:01:49.491687606Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=788.983µs grafana | logger=migrator t=2024-08-03T17:01:49.494961589Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-08-03T17:01:49.495691151Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=729.492µs grafana | logger=migrator t=2024-08-03T17:01:49.498905095Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-08-03T17:01:49.499610538Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=705.503µs grafana | logger=migrator t=2024-08-03T17:01:49.505671912Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-08-03T17:01:49.506375365Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=703.463µs 
grafana | logger=migrator t=2024-08-03T17:01:49.509873509Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-08-03T17:01:49.514069696Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.207037ms grafana | logger=migrator t=2024-08-03T17:01:49.517874431Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-08-03T17:01:49.518769456Z level=info msg="Migration successfully executed" id="create user table v2" duration=894.735µs grafana | logger=migrator t=2024-08-03T17:01:49.522012979Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-08-03T17:01:49.522903302Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=890.233µs grafana | logger=migrator t=2024-08-03T17:01:49.528227994Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-08-03T17:01:49.529023307Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=795.313µs grafana | logger=migrator t=2024-08-03T17:01:49.532111949Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-08-03T17:01:49.532520741Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=408.582µs grafana | logger=migrator t=2024-08-03T17:01:49.536393767Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-08-03T17:01:49.537432451Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.043294ms grafana | logger=migrator t=2024-08-03T17:01:49.540763154Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-08-03T17:01:49.542478542Z level=info msg="Migration successfully executed" id="Add 
column help_flags1 to user table" duration=1.715978ms grafana | logger=migrator t=2024-08-03T17:01:49.546552559Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-08-03T17:01:49.546577239Z level=info msg="Migration successfully executed" id="Update user table charset" duration=24.98µs grafana | logger=migrator t=2024-08-03T17:01:49.551042686Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-08-03T17:01:49.551824859Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=782.103µs grafana | logger=migrator t=2024-08-03T17:01:49.554727481Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-08-03T17:01:49.555157004Z level=info msg="Migration successfully executed" id="Add missing user data" duration=430.203µs grafana | logger=migrator t=2024-08-03T17:01:49.558670498Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-08-03T17:01:49.560490905Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.820506ms grafana | logger=migrator t=2024-08-03T17:01:49.563894069Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-08-03T17:01:49.564606221Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=711.592µs grafana | logger=migrator t=2024-08-03T17:01:49.569301021Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-08-03T17:01:49.570412515Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.111924ms grafana | logger=migrator t=2024-08-03T17:01:49.573416907Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator 
t=2024-08-03T17:01:49.581178079Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.761062ms grafana | logger=migrator t=2024-08-03T17:01:49.584238881Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-08-03T17:01:49.585307375Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.068214ms grafana | logger=migrator t=2024-08-03T17:01:49.5887595Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-08-03T17:01:49.588987291Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=227.861µs grafana | logger=migrator t=2024-08-03T17:01:49.594233302Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-08-03T17:01:49.595403547Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.170015ms grafana | logger=migrator t=2024-08-03T17:01:49.59862795Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-08-03T17:01:49.598902271Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=274.621µs grafana | logger=migrator t=2024-08-03T17:01:49.601962603Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2024-08-03T17:01:49.602318774Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=362.401µs grafana | logger=migrator t=2024-08-03T17:01:49.606609732Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2024-08-03T17:01:49.606870493Z level=info msg="Migration 
successfully executed" id="update login and email fields to lowercase2" duration=260.801µs
grafana | logger=migrator t=2024-08-03T17:01:49.610206046Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-08-03T17:01:49.611601673Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.395667ms
grafana | logger=migrator t=2024-08-03T17:01:49.615048636Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-08-03T17:01:49.615727009Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=678.163µs
grafana | logger=migrator t=2024-08-03T17:01:49.618913082Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-08-03T17:01:49.619609295Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=695.763µs
grafana | logger=migrator t=2024-08-03T17:01:49.624102633Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-08-03T17:01:49.625265538Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.162445ms
grafana | logger=migrator t=2024-08-03T17:01:49.628462711Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-08-03T17:01:49.629824156Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.361195ms
grafana | logger=migrator t=2024-08-03T17:01:49.633140509Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-08-03T17:01:49.633169769Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=30.08µs
grafana | logger=migrator t=2024-08-03T17:01:49.637873159Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.638509531Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=636.062µs
grafana | logger=migrator t=2024-08-03T17:01:49.641801386Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.64283187Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.030344ms
grafana | logger=migrator t=2024-08-03T17:01:49.646281773Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.647527669Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.245986ms
grafana | logger=migrator t=2024-08-03T17:01:49.652139037Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.652748409Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=609.042µs
grafana | logger=migrator t=2024-08-03T17:01:49.655909183Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.659391457Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.480074ms
grafana | logger=migrator t=2024-08-03T17:01:49.66283212Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-08-03T17:01:49.664255017Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.422927ms
grafana | logger=migrator t=2024-08-03T17:01:49.668948415Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.669652618Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=703.853µs
grafana | logger=migrator t=2024-08-03T17:01:49.672739811Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.673468294Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=728.213µs
grafana | logger=migrator t=2024-08-03T17:01:49.676633456Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.677800021Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.166295ms
grafana | logger=migrator t=2024-08-03T17:01:49.682646451Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.683377043Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=730.222µs
grafana | logger=migrator t=2024-08-03T17:01:49.686309295Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:49.686673238Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=359.373µs
grafana | logger=migrator t=2024-08-03T17:01:49.689551529Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-08-03T17:01:49.690012891Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=461.362µs
grafana | logger=migrator t=2024-08-03T17:01:49.693265224Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-08-03T17:01:49.693811346Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=545.962µs
grafana | logger=migrator t=2024-08-03T17:01:49.698280835Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2024-08-03T17:01:49.698861597Z level=info msg="Migration successfully executed" id="create star table" duration=586.272µs
grafana | logger=migrator t=2024-08-03T17:01:49.701930339Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2024-08-03T17:01:49.702664622Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=734.313µs
grafana | logger=migrator t=2024-08-03T17:01:49.705667084Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2024-08-03T17:01:49.706643889Z level=info msg="Migration successfully executed" id="create org table v1" duration=975.125µs
grafana | logger=migrator t=2024-08-03T17:01:49.71181901Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.713004534Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.185704ms
grafana | logger=migrator t=2024-08-03T17:01:49.716439078Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2024-08-03T17:01:49.7170889Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=647.022µs
grafana | logger=migrator t=2024-08-03T17:01:49.720139993Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.720840066Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=694.493µs
grafana | logger=migrator t=2024-08-03T17:01:49.723799618Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.724511631Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=711.813µs
grafana | logger=migrator t=2024-08-03T17:01:49.729319811Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.730026063Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=706.112µs
grafana | logger=migrator t=2024-08-03T17:01:49.733063795Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2024-08-03T17:01:49.733101256Z level=info msg="Migration successfully executed" id="Update org table charset" duration=39.141µs
grafana | logger=migrator t=2024-08-03T17:01:49.735697066Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2024-08-03T17:01:49.735736426Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=41.15µs
grafana | logger=migrator t=2024-08-03T17:01:49.7390342Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2024-08-03T17:01:49.73923408Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=199.91µs
grafana | logger=migrator t=2024-08-03T17:01:49.743672479Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2024-08-03T17:01:49.744357252Z level=info msg="Migration successfully executed" id="create dashboard table" duration=684.833µs
grafana | logger=migrator t=2024-08-03T17:01:49.747721225Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2024-08-03T17:01:49.748926561Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.205086ms
grafana | logger=migrator t=2024-08-03T17:01:49.752127273Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2024-08-03T17:01:49.753397188Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.269655ms
grafana | logger=migrator t=2024-08-03T17:01:49.756709402Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2024-08-03T17:01:49.757344095Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=634.513µs
grafana | logger=migrator t=2024-08-03T17:01:49.79315991Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2024-08-03T17:01:49.794393135Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.232835ms
grafana | logger=migrator t=2024-08-03T17:01:49.797756128Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.798782223Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.035555ms
grafana | logger=migrator t=2024-08-03T17:01:49.802141937Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.808457852Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.316725ms
grafana | logger=migrator t=2024-08-03T17:01:49.812978911Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2024-08-03T17:01:49.813742234Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=763.233µs
grafana | logger=migrator t=2024-08-03T17:01:49.816645105Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.817378408Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=733.053µs
grafana | logger=migrator t=2024-08-03T17:01:49.82041984Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.821149654Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=729.414µs
grafana | logger=migrator t=2024-08-03T17:01:49.825796492Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:49.826104913Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=308.531µs
grafana | logger=migrator t=2024-08-03T17:01:49.829111445Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2024-08-03T17:01:49.829870859Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=759.234µs
grafana | logger=migrator t=2024-08-03T17:01:49.833104602Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2024-08-03T17:01:49.833198312Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=94.59µs
grafana | logger=migrator t=2024-08-03T17:01:49.838856325Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.841569877Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.713572ms
grafana | logger=migrator t=2024-08-03T17:01:49.845017131Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.84737735Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.360589ms
grafana | logger=migrator t=2024-08-03T17:01:49.851684957Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.853233244Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.550627ms
grafana | logger=migrator t=2024-08-03T17:01:49.858702646Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.859301478Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=599.392µs
grafana | logger=migrator t=2024-08-03T17:01:49.86229665Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.864278629Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.980469ms
grafana | logger=migrator t=2024-08-03T17:01:49.868168614Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.868953448Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=784.484µs
grafana | logger=migrator t=2024-08-03T17:01:49.874775222Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2024-08-03T17:01:49.875396724Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=621.632µs
grafana | logger=migrator t=2024-08-03T17:01:49.878381256Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2024-08-03T17:01:49.878403586Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=22.88µs
grafana | logger=migrator t=2024-08-03T17:01:49.881198087Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2024-08-03T17:01:49.881218877Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=27.43µs
grafana | logger=migrator t=2024-08-03T17:01:49.886173108Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.887612813Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.439645ms
grafana | logger=migrator t=2024-08-03T17:01:49.890591355Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.892065382Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.473927ms
grafana | logger=migrator t=2024-08-03T17:01:49.896394489Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.898179007Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.788148ms
grafana | logger=migrator t=2024-08-03T17:01:49.901215499Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.902779766Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.563297ms
grafana | logger=migrator t=2024-08-03T17:01:49.907989347Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2024-08-03T17:01:49.908244868Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=249.691µs
grafana | logger=migrator t=2024-08-03T17:01:49.91124349Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2024-08-03T17:01:49.912661575Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.410885ms
grafana | logger=migrator t=2024-08-03T17:01:49.916161779Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2024-08-03T17:01:49.917603375Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.441256ms
grafana | logger=migrator t=2024-08-03T17:01:49.9237433Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2024-08-03T17:01:49.923887071Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=143.661µs
grafana | logger=migrator t=2024-08-03T17:01:49.927199014Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2024-08-03T17:01:49.92866326Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.463186ms
grafana | logger=migrator t=2024-08-03T17:01:49.932190294Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2024-08-03T17:01:49.93348391Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.297176ms
grafana | logger=migrator t=2024-08-03T17:01:49.936679383Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-03T17:01:49.942538146Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.858293ms
grafana | logger=migrator t=2024-08-03T17:01:49.947433037Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2024-08-03T17:01:49.9482803Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=847.713µs
grafana | logger=migrator t=2024-08-03T17:01:49.951076531Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.951725804Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=649.223µs
grafana | logger=migrator t=2024-08-03T17:01:49.954622646Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2024-08-03T17:01:49.955330048Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=707.242µs
grafana | logger=migrator t=2024-08-03T17:01:49.96065401Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:49.961338823Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=685.373µs
grafana | logger=migrator t=2024-08-03T17:01:49.964787257Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2024-08-03T17:01:49.965849971Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.062684ms
grafana | logger=migrator t=2024-08-03T17:01:49.970909752Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2024-08-03T17:01:49.974429846Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.554144ms
grafana | logger=migrator t=2024-08-03T17:01:49.98048856Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2024-08-03T17:01:49.981194544Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=706.114µs
grafana | logger=migrator t=2024-08-03T17:01:49.983192921Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2024-08-03T17:01:49.983492973Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=299.862µs
grafana | logger=migrator t=2024-08-03T17:01:49.986545476Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-08-03T17:01:49.986836507Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=290.681µs
grafana | logger=migrator t=2024-08-03T17:01:49.994744829Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-08-03T17:01:49.995775463Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.032314ms
grafana | logger=migrator t=2024-08-03T17:01:49.998615795Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-08-03T17:01:50.000855184Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.238779ms
grafana | logger=migrator t=2024-08-03T17:01:50.003768139Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2024-08-03T17:01:50.005964026Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.195447ms
grafana | logger=migrator t=2024-08-03T17:01:50.008588933Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2024-08-03T17:01:50.009461089Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=872.166µs
grafana | logger=migrator t=2024-08-03T17:01:50.014040078Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2024-08-03T17:01:50.015086496Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.045998ms
grafana | logger=migrator t=2024-08-03T17:01:50.018714069Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2024-08-03T17:01:50.019933298Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.216369ms
grafana | logger=migrator t=2024-08-03T17:01:50.022981278Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2024-08-03T17:01:50.024054125Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.072317ms
grafana | logger=migrator t=2024-08-03T17:01:50.029071208Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.029968883Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=896.945µs
grafana | logger=migrator t=2024-08-03T17:01:50.032918783Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.033795548Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=875.855µs
grafana | logger=migrator t=2024-08-03T17:01:50.040145131Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.048125743Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.982052ms
grafana | logger=migrator t=2024-08-03T17:01:50.050907541Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-08-03T17:01:50.051770887Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=863.376µs
grafana | logger=migrator t=2024-08-03T17:01:50.054404694Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:50.055084528Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=679.554µs
grafana | logger=migrator t=2024-08-03T17:01:50.059773569Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-08-03T17:01:50.060438313Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=664.934µs
grafana | logger=migrator t=2024-08-03T17:01:50.063466383Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-08-03T17:01:50.06451086Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.044717ms
grafana | logger=migrator t=2024-08-03T17:01:50.067691071Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-08-03T17:01:50.071029793Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.339692ms
grafana | logger=migrator t=2024-08-03T17:01:50.076044016Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-08-03T17:01:50.079301977Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.257681ms
grafana | logger=migrator t=2024-08-03T17:01:50.113083139Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-08-03T17:01:50.1133484Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=265.221µs
grafana | logger=migrator t=2024-08-03T17:01:50.117603458Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-08-03T17:01:50.118144352Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=540.494µs
grafana | logger=migrator t=2024-08-03T17:01:50.121702055Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-08-03T17:01:50.124176072Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.473737ms
grafana | logger=migrator t=2024-08-03T17:01:50.128980363Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-08-03T17:01:50.129345846Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=365.153µs
grafana | logger=migrator t=2024-08-03T17:01:50.132626247Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-08-03T17:01:50.132944379Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=318.242µs
grafana | logger=migrator t=2024-08-03T17:01:50.135533137Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-08-03T17:01:50.137979103Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.445506ms
grafana | logger=migrator t=2024-08-03T17:01:50.144826048Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-08-03T17:01:50.14511445Z level=info msg="Migration successfully executed" id="Update uid value" duration=287.972µs
grafana | logger=migrator t=2024-08-03T17:01:50.147623896Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-08-03T17:01:50.149042806Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.40951ms
grafana | logger=migrator t=2024-08-03T17:01:50.152780891Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-08-03T17:01:50.156203823Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=3.378092ms
grafana | logger=migrator t=2024-08-03T17:01:50.162443054Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2024-08-03T17:01:50.16772363Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=5.281986ms
grafana | logger=migrator t=2024-08-03T17:01:50.17087158Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2024-08-03T17:01:50.173316527Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.444727ms
grafana | logger=migrator t=2024-08-03T17:01:50.176348927Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-08-03T17:01:50.177180673Z level=info msg="Migration successfully executed" id="create api_key table" duration=831.406µs
grafana | logger=migrator t=2024-08-03T17:01:50.183148782Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-08-03T17:01:50.184585911Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.436949ms
grafana | logger=migrator t=2024-08-03T17:01:50.188252096Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-08-03T17:01:50.19044189Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=2.189654ms
grafana | logger=migrator t=2024-08-03T17:01:50.194261666Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-08-03T17:01:50.195894036Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.5536ms
grafana | logger=migrator t=2024-08-03T17:01:50.201591174Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.20246979Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=878.846µs
grafana | logger=migrator t=2024-08-03T17:01:50.205773862Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.206621208Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=847.486µs
grafana | logger=migrator t=2024-08-03T17:01:50.209636078Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.210488594Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=858.656µs
grafana | logger=migrator t=2024-08-03T17:01:50.215414696Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.223000327Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.585121ms
grafana | logger=migrator t=2024-08-03T17:01:50.226069407Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-08-03T17:01:50.226907402Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=837.725µs
grafana | logger=migrator t=2024-08-03T17:01:50.22965935Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:50.230554677Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=895.127µs
grafana | logger=migrator t=2024-08-03T17:01:50.235524019Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-08-03T17:01:50.236399995Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=876.396µs
grafana | logger=migrator t=2024-08-03T17:01:50.239375085Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-08-03T17:01:50.240351511Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=976.546µs
grafana | logger=migrator t=2024-08-03T17:01:50.246098159Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:50.246671593Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=573.264µs
grafana | logger=migrator t=2024-08-03T17:01:50.249684604Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-08-03T17:01:50.250481299Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=796.255µs
grafana | logger=migrator t=2024-08-03T17:01:50.253808091Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-08-03T17:01:50.253941162Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=135.851µs
grafana | logger=migrator t=2024-08-03T17:01:50.257126103Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-08-03T17:01:50.25983321Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.713427ms
grafana | logger=migrator t=2024-08-03T17:01:50.264634953Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-08-03T17:01:50.26731889Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.683027ms
grafana | logger=migrator t=2024-08-03T17:01:50.270437581Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-08-03T17:01:50.270714883Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=277.122µs
grafana | logger=migrator t=2024-08-03T17:01:50.274018015Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-08-03T17:01:50.276658392Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.639827ms
grafana | logger=migrator t=2024-08-03T17:01:50.281588355Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-08-03T17:01:50.284160112Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.571297ms
grafana | logger=migrator t=2024-08-03T17:01:50.287317273Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-08-03T17:01:50.288126078Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=808.075µs
grafana | logger=migrator t=2024-08-03T17:01:50.291287579Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-08-03T17:01:50.292036894Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=744.325µs
grafana | logger=migrator t=2024-08-03T17:01:50.29726315Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-08-03T17:01:50.298414177Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.151058ms
grafana | logger=migrator t=2024-08-03T17:01:50.301476057Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-08-03T17:01:50.302437593Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=960.726µs
grafana | logger=migrator t=2024-08-03T17:01:50.305566954Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-08-03T17:01:50.30647859Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=910.867µs
grafana | logger=migrator t=2024-08-03T17:01:50.312023497Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-08-03T17:01:50.312951013Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=927.076µs
grafana | logger=migrator t=2024-08-03T17:01:50.316196794Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-08-03T17:01:50.316379855Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=182.451µs
grafana | logger=migrator t=2024-08-03T17:01:50.319532286Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-08-03T17:01:50.319669828Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=136.792µs
grafana | logger=migrator t=2024-08-03T17:01:50.322818399Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2024-08-03T17:01:50.331190844Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=8.371095ms
grafana | logger=migrator t=2024-08-03T17:01:50.335869755Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2024-08-03T17:01:50.338609843Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.739878ms
grafana | logger=migrator t=2024-08-03T17:01:50.341750764Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2024-08-03T17:01:50.341927625Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=176.651µs
grafana | logger=migrator t=2024-08-03T17:01:50.345223267Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2024-08-03T17:01:50.346037792Z level=info msg="Migration successfully executed" id="create quota table v1" duration=814.135µs
grafana | logger=migrator t=2024-08-03T17:01:50.350574383Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.35171288Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.138177ms
grafana | logger=migrator t=2024-08-03T17:01:50.356278021Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2024-08-03T17:01:50.356411571Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=133.72µs
grafana | logger=migrator t=2024-08-03T17:01:50.360217306Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2024-08-03T17:01:50.361166622Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=845.536µs
grafana | logger=migrator t=2024-08-03T17:01:50.37134375Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:50.372091085Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=747.655µs
grafana | logger=migrator t=2024-08-03T17:01:50.376592385Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2024-08-03T17:01:50.385842956Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=9.250301ms
grafana | logger=migrator t=2024-08-03T17:01:50.389870593Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2024-08-03T17:01:50.390006554Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset"
duration=108.651µs grafana | logger=migrator t=2024-08-03T17:01:50.394253542Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-08-03T17:01:50.394964167Z level=info msg="Migration successfully executed" id="create session table" duration=710.514µs grafana | logger=migrator t=2024-08-03T17:01:50.398579591Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-08-03T17:01:50.398805132Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=224.961µs grafana | logger=migrator t=2024-08-03T17:01:50.441060833Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-08-03T17:01:50.441470415Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=409.012µs grafana | logger=migrator t=2024-08-03T17:01:50.445599562Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-08-03T17:01:50.447018412Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.41871ms grafana | logger=migrator t=2024-08-03T17:01:50.451914455Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-08-03T17:01:50.45274414Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=829.095µs grafana | logger=migrator t=2024-08-03T17:01:50.456245153Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-08-03T17:01:50.456388784Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=143.361µs grafana | logger=migrator t=2024-08-03T17:01:50.459941298Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-08-03T17:01:50.460093469Z level=info msg="Migration successfully 
executed" id="Update playlist_item table charset" duration=149.231µs grafana | logger=migrator t=2024-08-03T17:01:50.46483792Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-08-03T17:01:50.469887934Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.049154ms grafana | logger=migrator t=2024-08-03T17:01:50.473633278Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-08-03T17:01:50.47682398Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.190162ms grafana | logger=migrator t=2024-08-03T17:01:50.480292453Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-08-03T17:01:50.480453354Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=160.991µs grafana | logger=migrator t=2024-08-03T17:01:50.483776586Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-08-03T17:01:50.483933027Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=156.571µs grafana | logger=migrator t=2024-08-03T17:01:50.488397487Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-08-03T17:01:50.489843106Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.444989ms grafana | logger=migrator t=2024-08-03T17:01:50.49340746Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-08-03T17:01:50.493634122Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=226.222µs grafana | logger=migrator t=2024-08-03T17:01:50.497323146Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator 
t=2024-08-03T17:01:50.501179491Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.855505ms grafana | logger=migrator t=2024-08-03T17:01:50.50555366Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-08-03T17:01:50.505844162Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=287.082µs grafana | logger=migrator t=2024-08-03T17:01:50.509314415Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-08-03T17:01:50.513697954Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.383219ms grafana | logger=migrator t=2024-08-03T17:01:50.517037456Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-08-03T17:01:50.521734738Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.696332ms grafana | logger=migrator t=2024-08-03T17:01:50.526322718Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-08-03T17:01:50.526474049Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=91.34µs grafana | logger=migrator t=2024-08-03T17:01:50.530044803Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-08-03T17:01:50.530829218Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=784.175µs grafana | logger=migrator t=2024-08-03T17:01:50.53421165Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-08-03T17:01:50.535071006Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=860.495µs grafana | logger=migrator 
t=2024-08-03T17:01:50.5385703Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-08-03T17:01:50.539632696Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.061936ms grafana | logger=migrator t=2024-08-03T17:01:50.545748736Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-08-03T17:01:50.546524551Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=775.965µs grafana | logger=migrator t=2024-08-03T17:01:50.554972268Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-08-03T17:01:50.556481228Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.51412ms grafana | logger=migrator t=2024-08-03T17:01:50.562177706Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-08-03T17:01:50.5628829Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=705.194µs grafana | logger=migrator t=2024-08-03T17:01:50.56886052Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-08-03T17:01:50.569377564Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=514.224µs grafana | logger=migrator t=2024-08-03T17:01:50.573133678Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-08-03T17:01:50.573811813Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=678.274µs grafana | logger=migrator t=2024-08-03T17:01:50.577704139Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-08-03T17:01:50.578574104Z level=info msg="Migration successfully executed" id="drop 
index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=870.185µs grafana | logger=migrator t=2024-08-03T17:01:50.582738442Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-08-03T17:01:50.594621951Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.882339ms grafana | logger=migrator t=2024-08-03T17:01:50.597668641Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-08-03T17:01:50.598280765Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=611.754µs grafana | logger=migrator t=2024-08-03T17:01:50.600965823Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-08-03T17:01:50.60203446Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.068597ms grafana | logger=migrator t=2024-08-03T17:01:50.606347259Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-08-03T17:01:50.607002013Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=653.954µs grafana | logger=migrator t=2024-08-03T17:01:50.610870308Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-08-03T17:01:50.611694144Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=826.926µs grafana | logger=migrator t=2024-08-03T17:01:50.614763135Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-08-03T17:01:50.615710921Z level=info msg="Migration successfully executed" id="create alert_notification 
table v1" duration=948.816µs grafana | logger=migrator t=2024-08-03T17:01:50.620211791Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-08-03T17:01:50.623825174Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.613213ms grafana | logger=migrator t=2024-08-03T17:01:50.626775924Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-08-03T17:01:50.630343207Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.566693ms grafana | logger=migrator t=2024-08-03T17:01:50.633974022Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-08-03T17:01:50.637440125Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.464253ms grafana | logger=migrator t=2024-08-03T17:01:50.643709466Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-08-03T17:01:50.647845154Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.140098ms grafana | logger=migrator t=2024-08-03T17:01:50.652502844Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-08-03T17:01:50.653628342Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.125328ms grafana | logger=migrator t=2024-08-03T17:01:50.657046665Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-08-03T17:01:50.657080315Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=33.8µs grafana | logger=migrator t=2024-08-03T17:01:50.660279766Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-08-03T17:01:50.660307126Z level=info 
msg="Migration successfully executed" id="Update alert_notification table charset" duration=28.42µs grafana | logger=migrator t=2024-08-03T17:01:50.662991944Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-08-03T17:01:50.664209382Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.214778ms grafana | logger=migrator t=2024-08-03T17:01:50.669551287Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-08-03T17:01:50.670441694Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=890.277µs grafana | logger=migrator t=2024-08-03T17:01:50.673444424Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2024-08-03T17:01:50.674126718Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=681.474µs grafana | logger=migrator t=2024-08-03T17:01:50.677383949Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-08-03T17:01:50.678158114Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=772.375µs grafana | logger=migrator t=2024-08-03T17:01:50.682229742Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-08-03T17:01:50.683148818Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=918.476µs grafana | logger=migrator t=2024-08-03T17:01:50.686381949Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-08-03T17:01:50.692184887Z level=info msg="Migration successfully executed" id="Add for to alert 
table" duration=5.801858ms grafana | logger=migrator t=2024-08-03T17:01:50.697808805Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-08-03T17:01:50.707138917Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=9.330022ms grafana | logger=migrator t=2024-08-03T17:01:50.712606984Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-08-03T17:01:50.712744084Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=137.42µs grafana | logger=migrator t=2024-08-03T17:01:50.71661066Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-08-03T17:01:50.717553246Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=942.456µs grafana | logger=migrator t=2024-08-03T17:01:50.765133732Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-08-03T17:01:50.766102138Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=971.386µs grafana | logger=migrator t=2024-08-03T17:01:50.770514997Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-08-03T17:01:50.773763109Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.252542ms grafana | logger=migrator t=2024-08-03T17:01:50.776690469Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-08-03T17:01:50.776742119Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=52.47µs grafana | logger=migrator t=2024-08-03T17:01:50.781110388Z level=info 
msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-08-03T17:01:50.781727831Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=619.624µs grafana | logger=migrator t=2024-08-03T17:01:50.784293858Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-08-03T17:01:50.784879203Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=585.305µs grafana | logger=migrator t=2024-08-03T17:01:50.787359059Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-08-03T17:01:50.787438929Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=79.631µs grafana | logger=migrator t=2024-08-03T17:01:50.789871155Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-08-03T17:01:50.79048961Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=618.105µs grafana | logger=migrator t=2024-08-03T17:01:50.795883716Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-08-03T17:01:50.79651593Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=634.544µs grafana | logger=migrator t=2024-08-03T17:01:50.799184678Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-08-03T17:01:50.799773821Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=589.343µs grafana | logger=migrator t=2024-08-03T17:01:50.802358719Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-08-03T17:01:50.802986233Z level=info msg="Migration successfully executed" id="add index 
annotation 2 v3" duration=627.324µs grafana | logger=migrator t=2024-08-03T17:01:50.807914625Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-08-03T17:01:50.80859063Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=675.675µs grafana | logger=migrator t=2024-08-03T17:01:50.811169407Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-08-03T17:01:50.811824841Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=655.774µs grafana | logger=migrator t=2024-08-03T17:01:50.816365022Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-08-03T17:01:50.816387182Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.32µs grafana | logger=migrator t=2024-08-03T17:01:50.818434355Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.821336715Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=2.90031ms grafana | logger=migrator t=2024-08-03T17:01:50.824000983Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-08-03T17:01:50.824598016Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=597.193µs grafana | logger=migrator t=2024-08-03T17:01:50.8296651Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.832448118Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.780818ms grafana | logger=migrator t=2024-08-03T17:01:50.834931434Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-08-03T17:01:50.835438298Z level=info 
msg="Migration successfully executed" id="Create annotation_tag table v2" duration=508.524µs grafana | logger=migrator t=2024-08-03T17:01:50.838262537Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2024-08-03T17:01:50.838896731Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=633.984µs grafana | logger=migrator t=2024-08-03T17:01:50.844291817Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-08-03T17:01:50.84486995Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=578.553µs grafana | logger=migrator t=2024-08-03T17:01:50.846736093Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2024-08-03T17:01:50.854883267Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.146504ms grafana | logger=migrator t=2024-08-03T17:01:50.857669296Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2024-08-03T17:01:50.858220149Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=550.433µs grafana | logger=migrator t=2024-08-03T17:01:50.863221732Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2024-08-03T17:01:50.863952507Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=730.455µs grafana | logger=migrator t=2024-08-03T17:01:50.866572695Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 
grafana | logger=migrator t=2024-08-03T17:01:50.866828376Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=255.021µs grafana | logger=migrator t=2024-08-03T17:01:50.870050008Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-08-03T17:01:50.87046524Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=415.052µs grafana | logger=migrator t=2024-08-03T17:01:50.875730875Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-08-03T17:01:50.875887916Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=157.401µs grafana | logger=migrator t=2024-08-03T17:01:50.879368869Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.882222838Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.853549ms grafana | logger=migrator t=2024-08-03T17:01:50.884896566Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.887830135Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.933219ms grafana | logger=migrator t=2024-08-03T17:01:50.890503463Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.891237559Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=735.085µs grafana | logger=migrator t=2024-08-03T17:01:50.895932359Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.896621843Z level=info msg="Migration successfully executed" id="Add index for updated in 
annotation table" duration=689.014µs grafana | logger=migrator t=2024-08-03T17:01:50.89914628Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-08-03T17:01:50.899374341Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=227.661µs grafana | logger=migrator t=2024-08-03T17:01:50.905242131Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-08-03T17:01:50.908366052Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.125721ms grafana | logger=migrator t=2024-08-03T17:01:50.913581877Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-08-03T17:01:50.914283841Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=701.874µs grafana | logger=migrator t=2024-08-03T17:01:50.917348961Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-08-03T17:01:50.917472142Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=123.561µs grafana | logger=migrator t=2024-08-03T17:01:50.919443345Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-08-03T17:01:50.919739327Z level=info msg="Migration successfully executed" id="Move region to single row" duration=295.942µs grafana | logger=migrator t=2024-08-03T17:01:50.923297071Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-08-03T17:01:50.923942045Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=645.054µs grafana | logger=migrator t=2024-08-03T17:01:50.92920469Z level=info msg="Executing migration" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2024-08-03T17:01:50.929854474Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=648.034µs
grafana | logger=migrator t=2024-08-03T17:01:50.932579922Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-08-03T17:01:50.933272277Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=691.535µs
grafana | logger=migrator t=2024-08-03T17:01:50.938128439Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-08-03T17:01:50.939000075Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=871.086µs
grafana | logger=migrator t=2024-08-03T17:01:50.942038155Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2024-08-03T17:01:50.94275189Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=713.735µs
grafana | logger=migrator t=2024-08-03T17:01:50.946063352Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2024-08-03T17:01:50.946885817Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=823.426µs
grafana | logger=migrator t=2024-08-03T17:01:50.951572978Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2024-08-03T17:01:50.951629498Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=56.7µs
grafana | logger=migrator t=2024-08-03T17:01:50.954418337Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2024-08-03T17:01:50.955244342Z level=info msg="Migration successfully executed" id="create test_data table" duration=825.895µs
grafana | logger=migrator t=2024-08-03T17:01:50.958263642Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2024-08-03T17:01:50.959050218Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=786.586µs
grafana | logger=migrator t=2024-08-03T17:01:50.963819019Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2024-08-03T17:01:50.964672605Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=853.806µs
grafana | logger=migrator t=2024-08-03T17:01:50.969169805Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2024-08-03T17:01:50.970241102Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.078247ms
grafana | logger=migrator t=2024-08-03T17:01:50.973373083Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2024-08-03T17:01:50.973563494Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=190.161µs
grafana | logger=migrator t=2024-08-03T17:01:50.979446713Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2024-08-03T17:01:50.980216038Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=768.675µs
grafana | logger=migrator t=2024-08-03T17:01:50.987455476Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2024-08-03T17:01:50.987620727Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=143.841µs
grafana | logger=migrator t=2024-08-03T17:01:50.99246142Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2024-08-03T17:01:50.993643247Z level=info msg="Migration successfully executed" id="create team table" duration=1.190987ms
grafana | logger=migrator t=2024-08-03T17:01:51.001739951Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2024-08-03T17:01:51.003341332Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.608671ms
grafana | logger=migrator t=2024-08-03T17:01:51.008936969Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2024-08-03T17:01:51.010255617Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.318568ms
grafana | logger=migrator t=2024-08-03T17:01:51.014657877Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2024-08-03T17:01:51.021585602Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.933465ms
grafana | logger=migrator t=2024-08-03T17:01:51.026470465Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2024-08-03T17:01:51.026724516Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=253.611µs
grafana | logger=migrator t=2024-08-03T17:01:51.031343127Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2024-08-03T17:01:51.032529494Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.186717ms
grafana | logger=migrator t=2024-08-03T17:01:51.038587084Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2024-08-03T17:01:51.039961744Z level=info msg="Migration successfully executed" id="create team member table" duration=1.37355ms
grafana | logger=migrator t=2024-08-03T17:01:51.05011849Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2024-08-03T17:01:51.051661341Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.541991ms
grafana | logger=migrator t=2024-08-03T17:01:51.056858325Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2024-08-03T17:01:51.058363965Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.50267ms
grafana | logger=migrator t=2024-08-03T17:01:51.0621293Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2024-08-03T17:01:51.063084016Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=954.146µs
grafana | logger=migrator t=2024-08-03T17:01:51.068970585Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2024-08-03T17:01:51.073503825Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.53265ms
grafana | logger=migrator t=2024-08-03T17:01:51.078892881Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2024-08-03T17:01:51.083476901Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.58299ms
grafana | logger=migrator t=2024-08-03T17:01:51.088750635Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2024-08-03T17:01:51.093275226Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.523521ms
grafana | logger=migrator t=2024-08-03T17:01:51.098560701Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2024-08-03T17:01:51.099515747Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=956.266µs
grafana | logger=migrator t=2024-08-03T17:01:51.104001986Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2024-08-03T17:01:51.105427436Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.4244ms
grafana | logger=migrator t=2024-08-03T17:01:51.110119727Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2024-08-03T17:01:51.111808378Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.688451ms
grafana | logger=migrator t=2024-08-03T17:01:51.116509439Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2024-08-03T17:01:51.117462295Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=954.786µs
grafana | logger=migrator t=2024-08-03T17:01:51.123074882Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2024-08-03T17:01:51.124680843Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.605911ms
grafana | logger=migrator t=2024-08-03T17:01:51.130659542Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2024-08-03T17:01:51.131617498Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=957.726µs
grafana | logger=migrator t=2024-08-03T17:01:51.135494354Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2024-08-03T17:01:51.136724222Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.229038ms
grafana | logger=migrator t=2024-08-03T17:01:51.140464897Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2024-08-03T17:01:51.141947396Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.482769ms
grafana | logger=migrator t=2024-08-03T17:01:51.145912232Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2024-08-03T17:01:51.146430177Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=517.945µs
grafana | logger=migrator t=2024-08-03T17:01:51.150533223Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2024-08-03T17:01:51.150842025Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=308.892µs
grafana | logger=migrator t=2024-08-03T17:01:51.154171047Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2024-08-03T17:01:51.155375256Z level=info msg="Migration successfully executed" id="create tag table" duration=1.203209ms
grafana | logger=migrator t=2024-08-03T17:01:51.160940672Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2024-08-03T17:01:51.162465452Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.524869ms
grafana | logger=migrator t=2024-08-03T17:01:51.165948725Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2024-08-03T17:01:51.167253774Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.304649ms
grafana | logger=migrator t=2024-08-03T17:01:51.171707013Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2024-08-03T17:01:51.172681569Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=974.426µs
grafana | logger=migrator t=2024-08-03T17:01:51.177378541Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2024-08-03T17:01:51.178309377Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=928.976µs
grafana | logger=migrator t=2024-08-03T17:01:51.183173019Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-03T17:01:51.197242132Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.067173ms
grafana | logger=migrator t=2024-08-03T17:01:51.202447856Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2024-08-03T17:01:51.20307687Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=605.164µs
grafana | logger=migrator t=2024-08-03T17:01:51.207249127Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2024-08-03T17:01:51.208223204Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=974.477µs
grafana | logger=migrator t=2024-08-03T17:01:51.212269821Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:51.212963145Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=694.674µs
grafana | logger=migrator t=2024-08-03T17:01:51.218845014Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2024-08-03T17:01:51.220014292Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.168288ms
grafana | logger=migrator t=2024-08-03T17:01:51.225357227Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2024-08-03T17:01:51.226346053Z level=info msg="Migration successfully executed" id="create user auth table" duration=988.626µs
grafana | logger=migrator t=2024-08-03T17:01:51.230795163Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2024-08-03T17:01:51.232014581Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.219358ms
grafana | logger=migrator t=2024-08-03T17:01:51.235838347Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2024-08-03T17:01:51.235972668Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=136.9µs
grafana | logger=migrator t=2024-08-03T17:01:51.241758276Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.250054651Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.296385ms
grafana | logger=migrator t=2024-08-03T17:01:51.253438343Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.258631187Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.192854ms
grafana | logger=migrator t=2024-08-03T17:01:51.262452192Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.270611596Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=8.159404ms
grafana | logger=migrator t=2024-08-03T17:01:51.27885442Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.285524994Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=6.670374ms
grafana | logger=migrator t=2024-08-03T17:01:51.290067764Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.291221271Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.153507ms
grafana | logger=migrator t=2024-08-03T17:01:51.295727102Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2024-08-03T17:01:51.300871536Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.144434ms
grafana | logger=migrator t=2024-08-03T17:01:51.305539196Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2024-08-03T17:01:51.306410602Z level=info msg="Migration successfully executed" id="create server_lock table" duration=870.406µs
grafana | logger=migrator t=2024-08-03T17:01:51.312580173Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2024-08-03T17:01:51.313589379Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.009206ms
grafana | logger=migrator t=2024-08-03T17:01:51.319693989Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2024-08-03T17:01:51.320598096Z level=info msg="Migration successfully executed" id="create user auth token table" duration=904.107µs
grafana | logger=migrator t=2024-08-03T17:01:51.324961274Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2024-08-03T17:01:51.325922781Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=961.617µs
grafana | logger=migrator t=2024-08-03T17:01:51.340238396Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2024-08-03T17:01:51.341277972Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.039576ms
grafana | logger=migrator t=2024-08-03T17:01:51.346745028Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2024-08-03T17:01:51.347776986Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.031958ms
grafana | logger=migrator t=2024-08-03T17:01:51.352777158Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2024-08-03T17:01:51.359114841Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.337683ms
grafana | logger=migrator t=2024-08-03T17:01:51.364116063Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2024-08-03T17:01:51.364800238Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=684.175µs
grafana | logger=migrator t=2024-08-03T17:01:51.36821675Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2024-08-03T17:01:51.368820204Z level=info msg="Migration successfully executed" id="create cache_data table" duration=603.454µs
grafana | logger=migrator t=2024-08-03T17:01:51.372292047Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2024-08-03T17:01:51.373984398Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.692251ms
grafana | logger=migrator t=2024-08-03T17:01:51.384484928Z level=info msg="Executing migration" id="create short_url table v1"
grafana | logger=migrator t=2024-08-03T17:01:51.385639295Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.153657ms
grafana | logger=migrator t=2024-08-03T17:01:51.394983376Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
grafana | logger=migrator t=2024-08-03T17:01:51.395741522Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=758.146µs
grafana | logger=migrator t=2024-08-03T17:01:51.39849735Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
grafana | logger=migrator t=2024-08-03T17:01:51.39855641Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=58.76µs
grafana | logger=migrator t=2024-08-03T17:01:51.402572087Z level=info msg="Executing migration" id="delete alert_definition table"
grafana | logger=migrator t=2024-08-03T17:01:51.402679257Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=107.27µs
grafana | logger=migrator t=2024-08-03T17:01:51.406368321Z level=info msg="Executing migration" id="recreate alert_definition table"
grafana | logger=migrator t=2024-08-03T17:01:51.407319438Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=950.117µs
grafana | logger=migrator t=2024-08-03T17:01:51.411650137Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.412620384Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=969.687µs
grafana | logger=migrator t=2024-08-03T17:01:51.419576479Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2024-08-03T17:01:51.420538455Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=962.196µs
grafana | logger=migrator t=2024-08-03T17:01:51.423672197Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-08-03T17:01:51.423754437Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=85.69µs
grafana | logger=migrator t=2024-08-03T17:01:51.428199336Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.429327854Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.128518ms
grafana | logger=migrator t=2024-08-03T17:01:51.433607452Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2024-08-03T17:01:51.434480897Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=871.445µs
grafana | logger=migrator t=2024-08-03T17:01:51.440991731Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.441695665Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=703.934µs
grafana | logger=migrator t=2024-08-03T17:01:51.445881273Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2024-08-03T17:01:51.446584438Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=702.615µs
grafana | logger=migrator t=2024-08-03T17:01:51.451776482Z level=info msg="Executing migration" id="Add column paused in alert_definition"
grafana | logger=migrator t=2024-08-03T17:01:51.457402469Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.625817ms
grafana | logger=migrator t=2024-08-03T17:01:51.461625797Z level=info msg="Executing migration" id="drop alert_definition table"
grafana | logger=migrator t=2024-08-03T17:01:51.462785745Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.159108ms
grafana | logger=migrator t=2024-08-03T17:01:51.465696834Z level=info msg="Executing migration" id="delete alert_definition_version table"
grafana | logger=migrator t=2024-08-03T17:01:51.465779135Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.371µs
grafana | logger=migrator t=2024-08-03T17:01:51.470855197Z level=info msg="Executing migration" id="recreate alert_definition_version table"
grafana | logger=migrator t=2024-08-03T17:01:51.472024195Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.168308ms
grafana | logger=migrator t=2024-08-03T17:01:51.479080951Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
grafana | logger=migrator t=2024-08-03T17:01:51.480563892Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.483041ms
grafana | logger=migrator t=2024-08-03T17:01:51.486048978Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
grafana | logger=migrator t=2024-08-03T17:01:51.487159885Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.110907ms
grafana | logger=migrator t=2024-08-03T17:01:51.492671981Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-08-03T17:01:51.492770592Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=98.611µs
grafana | logger=migrator t=2024-08-03T17:01:51.498663531Z level=info msg="Executing migration" id="drop alert_definition_version table"
grafana | logger=migrator t=2024-08-03T17:01:51.49995687Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.293339ms
grafana | logger=migrator t=2024-08-03T17:01:51.504969743Z level=info msg="Executing migration" id="create alert_instance table"
grafana | logger=migrator t=2024-08-03T17:01:51.505975919Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.004106ms
grafana | logger=migrator t=2024-08-03T17:01:51.51213814Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
grafana | logger=migrator t=2024-08-03T17:01:51.513267588Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.128948ms
grafana | logger=migrator t=2024-08-03T17:01:51.520198293Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
grafana | logger=migrator t=2024-08-03T17:01:51.521704634Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.506341ms
grafana | logger=migrator t=2024-08-03T17:01:51.526004342Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.536065848Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=10.060506ms
grafana | logger=migrator t=2024-08-03T17:01:51.541718186Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.543001444Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.283258ms
grafana | logger=migrator t=2024-08-03T17:01:51.548015897Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.549430476Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.414579ms
grafana | logger=migrator t=2024-08-03T17:01:51.555419646Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.589242799Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=33.822153ms
grafana | logger=migrator t=2024-08-03T17:01:51.594287733Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.622180306Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.891873ms
grafana | logger=migrator t=2024-08-03T17:01:51.649084214Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.650624294Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.54013ms
grafana | logger=migrator t=2024-08-03T17:01:51.655706627Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.656836075Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.128658ms
grafana | logger=migrator t=2024-08-03T17:01:51.661792158Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2024-08-03T17:01:51.669866901Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.074743ms
grafana | logger=migrator t=2024-08-03T17:01:51.675474268Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2024-08-03T17:01:51.683287819Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.813551ms
grafana | logger=migrator t=2024-08-03T17:01:51.687500738Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2024-08-03T17:01:51.688365083Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=863.145µs
grafana | logger=migrator t=2024-08-03T17:01:51.693083535Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.694527704Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.444169ms
grafana | logger=migrator t=2024-08-03T17:01:51.698820393Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2024-08-03T17:01:51.70007048Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.249087ms
grafana | logger=migrator t=2024-08-03T17:01:51.705407136Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2024-08-03T17:01:51.707357549Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.949413ms
grafana | logger=migrator t=2024-08-03T17:01:51.711914879Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-08-03T17:01:51.71206313Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=148.241µs
grafana | logger=migrator t=2024-08-03T17:01:51.71813455Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.72269756Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.56301ms
grafana | logger=migrator t=2024-08-03T17:01:51.726662997Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.730779163Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.116166ms
grafana | logger=migrator t=2024-08-03T17:01:51.736455781Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.740586988Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.131207ms
grafana | logger=migrator t=2024-08-03T17:01:51.747614684Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.748532391Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=917.707µs
grafana | logger=migrator t=2024-08-03T17:01:51.754064287Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2024-08-03T17:01:51.754747372Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=683.085µs
grafana | logger=migrator t=2024-08-03T17:01:51.758959999Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.763173847Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.213848ms
grafana | logger=migrator t=2024-08-03T17:01:51.769080966Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.773229813Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.148847ms
grafana | logger=migrator t=2024-08-03T17:01:51.77729681Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2024-08-03T17:01:51.777969485Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=672.675µs
grafana | logger=migrator t=2024-08-03T17:01:51.782093063Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2024-08-03T17:01:51.78629878Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.205717ms
grafana | logger=migrator t=2024-08-03T17:01:51.791724996Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2024-08-03T17:01:51.796180025Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.455029ms
grafana | logger=migrator t=2024-08-03T17:01:51.801159768Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2024-08-03T17:01:51.801208198Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=47.53µs
grafana | logger=migrator t=2024-08-03T17:01:51.804917213Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2024-08-03T17:01:51.80601376Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.096547ms
grafana | logger=migrator t=2024-08-03T17:01:51.812019139Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2024-08-03T17:01:51.812730114Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=710.975µs
grafana | logger=migrator t=2024-08-03T17:01:51.81662114Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2024-08-03T17:01:51.817390085Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=768.945µs
grafana | logger=migrator t=2024-08-03T17:01:51.822582119Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-08-03T17:01:51.82262893Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=46.811µs
grafana | logger=migrator t=2024-08-03T17:01:51.827933465Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2024-08-03T17:01:51.832932788Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.000103ms
grafana | logger=migrator t=2024-08-03T17:01:51.837264246Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2024-08-03T17:01:51.841609685Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.345439ms
grafana | logger=migrator t=2024-08-03T17:01:51.846347076Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2024-08-03T17:01:51.850668465Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.321389ms
grafana | logger=migrator t=2024-08-03T17:01:51.85593237Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2024-08-03T17:01:51.860398589Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.466219ms
grafana | logger=migrator t=2024-08-03T17:01:51.864169144Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2024-08-03T17:01:51.868563152Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.394008ms
grafana | logger=migrator t=2024-08-03T17:01:51.873038632Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2024-08-03T17:01:51.873089173Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=51.331µs
grafana | logger=migrator t=2024-08-03T17:01:51.877551293Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2024-08-03T17:01:51.878101666Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=550.833µs
grafana | logger=migrator t=2024-08-03T17:01:51.881892481Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2024-08-03T17:01:51.88630941Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.41706ms
grafana | logger=migrator t=2024-08-03T17:01:51.891947627Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2024-08-03T17:01:51.891996558Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=48.931µs
grafana | logger=migrator t=2024-08-03T17:01:51.897076291Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2024-08-03T17:01:51.90146536Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.389089ms
grafana | logger=migrator t=2024-08-03T17:01:51.906249232Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2024-08-03T17:01:51.906922386Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=672.844µs
grafana | logger=migrator t=2024-08-03T17:01:51.911698368Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2024-08-03T17:01:51.916115796Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.417428ms
grafana | logger=migrator t=2024-08-03T17:01:51.921079889Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2024-08-03T17:01:51.921658393Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table
duration=578.544µs grafana | logger=migrator t=2024-08-03T17:01:51.926461905Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-08-03T17:01:51.927172069Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=711.234µs grafana | logger=migrator t=2024-08-03T17:01:51.933455472Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-08-03T17:01:51.940733249Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.277167ms grafana | logger=migrator t=2024-08-03T17:01:51.94541516Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-08-03T17:01:51.946227645Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=813.905µs grafana | logger=migrator t=2024-08-03T17:01:51.951906343Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2024-08-03T17:01:51.95294524Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.038847ms grafana | logger=migrator t=2024-08-03T17:01:51.958304625Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2024-08-03T17:01:51.959158311Z level=info msg="Migration successfully executed" id="create alert_image table" duration=853.416µs grafana | logger=migrator t=2024-08-03T17:01:51.963066947Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2024-08-03T17:01:51.964440056Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.371409ms grafana | logger=migrator 
t=2024-08-03T17:01:51.969820232Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2024-08-03T17:01:51.969885152Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=65.34µs grafana | logger=migrator t=2024-08-03T17:01:51.9741433Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2024-08-03T17:01:51.975125596Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=980.646µs grafana | logger=migrator t=2024-08-03T17:01:51.980069499Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2024-08-03T17:01:51.981012196Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=943.636µs grafana | logger=migrator t=2024-08-03T17:01:51.987099285Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-08-03T17:01:51.987502578Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-08-03T17:01:51.99086944Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2024-08-03T17:01:51.991291963Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=422.323µs grafana | logger=migrator t=2024-08-03T17:01:51.994350213Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2024-08-03T17:01:51.995643592Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.292739ms grafana | 
logger=migrator t=2024-08-03T17:01:52.002306796Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2024-08-03T17:01:52.015997509Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=13.691563ms grafana | logger=migrator t=2024-08-03T17:01:52.019862181Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2024-08-03T17:01:52.021256733Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.399112ms grafana | logger=migrator t=2024-08-03T17:01:52.025875822Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2024-08-03T17:01:52.027062072Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.18349ms grafana | logger=migrator t=2024-08-03T17:01:52.032663748Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2024-08-03T17:01:52.033493435Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=830.007µs grafana | logger=migrator t=2024-08-03T17:01:52.037922363Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2024-08-03T17:01:52.03890897Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=986.608µs grafana | logger=migrator t=2024-08-03T17:01:52.0448481Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2024-08-03T17:01:52.046193132Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.345532ms grafana | 
logger=migrator t=2024-08-03T17:01:52.054475251Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2024-08-03T17:01:52.054507761Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=34.55µs grafana | logger=migrator t=2024-08-03T17:01:52.059122919Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2024-08-03T17:01:52.059315021Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=86.89µs grafana | logger=migrator t=2024-08-03T17:01:52.064564174Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2024-08-03T17:01:52.072089057Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=7.523503ms grafana | logger=migrator t=2024-08-03T17:01:52.075160763Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2024-08-03T17:01:52.075597466Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=436.563µs grafana | logger=migrator t=2024-08-03T17:01:52.080252826Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2024-08-03T17:01:52.081505566Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.25209ms grafana | logger=migrator t=2024-08-03T17:01:52.085878173Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2024-08-03T17:01:52.086263516Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=385.283µs grafana | logger=migrator t=2024-08-03T17:01:52.090734893Z level=info msg="Executing migration" id="create 
data_keys table" grafana | logger=migrator t=2024-08-03T17:01:52.092443097Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.706854ms grafana | logger=migrator t=2024-08-03T17:01:52.097250317Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2024-08-03T17:01:52.09868446Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.433963ms grafana | logger=migrator t=2024-08-03T17:01:52.106568275Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2024-08-03T17:01:52.140042044Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.473739ms grafana | logger=migrator t=2024-08-03T17:01:52.156177459Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2024-08-03T17:01:52.166243873Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.072384ms grafana | logger=migrator t=2024-08-03T17:01:52.173414883Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2024-08-03T17:01:52.173564034Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=149.351µs grafana | logger=migrator t=2024-08-03T17:01:52.179491583Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2024-08-03T17:01:52.20898714Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.495937ms grafana | logger=migrator t=2024-08-03T17:01:52.212930772Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2024-08-03T17:01:52.243277666Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.345724ms grafana 
| logger=migrator t=2024-08-03T17:01:52.262489946Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2024-08-03T17:01:52.263660115Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.174289ms grafana | logger=migrator t=2024-08-03T17:01:52.267042144Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2024-08-03T17:01:52.268120693Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.078719ms grafana | logger=migrator t=2024-08-03T17:01:52.27139372Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2024-08-03T17:01:52.271595452Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=206.582µs grafana | logger=migrator t=2024-08-03T17:01:52.277082058Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2024-08-03T17:01:52.277950856Z level=info msg="Migration successfully executed" id="create permission table" duration=868.668µs grafana | logger=migrator t=2024-08-03T17:01:52.281965169Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2024-08-03T17:01:52.282923606Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=958.167µs grafana | logger=migrator t=2024-08-03T17:01:52.286140663Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2024-08-03T17:01:52.287194792Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.036519ms grafana | logger=migrator t=2024-08-03T17:01:52.292269265Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator 
t=2024-08-03T17:01:52.293099042Z level=info msg="Migration successfully executed" id="create role table" duration=829.847µs grafana | logger=migrator t=2024-08-03T17:01:52.296332048Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2024-08-03T17:01:52.30372561Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.393302ms grafana | logger=migrator t=2024-08-03T17:01:52.307039548Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2024-08-03T17:01:52.31444495Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.405092ms grafana | logger=migrator t=2024-08-03T17:01:52.320793043Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2024-08-03T17:01:52.322185214Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.402471ms grafana | logger=migrator t=2024-08-03T17:01:52.32646437Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2024-08-03T17:01:52.327583899Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.119229ms grafana | logger=migrator t=2024-08-03T17:01:52.332532671Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2024-08-03T17:01:52.334404396Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.876196ms grafana | logger=migrator t=2024-08-03T17:01:52.34324984Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2024-08-03T17:01:52.344316009Z level=info msg="Migration successfully executed" id="create team role table" duration=1.068339ms grafana | logger=migrator t=2024-08-03T17:01:52.34794299Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator 
t=2024-08-03T17:01:52.348987018Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.043718ms grafana | logger=migrator t=2024-08-03T17:01:52.353211813Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2024-08-03T17:01:52.354314263Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.102029ms grafana | logger=migrator t=2024-08-03T17:01:52.365922769Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2024-08-03T17:01:52.367105429Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.18213ms grafana | logger=migrator t=2024-08-03T17:01:52.377412665Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2024-08-03T17:01:52.378838067Z level=info msg="Migration successfully executed" id="create user role table" duration=1.423552ms grafana | logger=migrator t=2024-08-03T17:01:52.387274658Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2024-08-03T17:01:52.388685449Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.410241ms grafana | logger=migrator t=2024-08-03T17:01:52.399790622Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2024-08-03T17:01:52.401701058Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.909306ms grafana | logger=migrator t=2024-08-03T17:01:52.410828354Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2024-08-03T17:01:52.412887451Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.057937ms grafana | logger=migrator 
t=2024-08-03T17:01:52.423245757Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2024-08-03T17:01:52.424139565Z level=info msg="Migration successfully executed" id="create builtin role table" duration=892.998µs grafana | logger=migrator t=2024-08-03T17:01:52.432830868Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2024-08-03T17:01:52.434311019Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.479251ms grafana | logger=migrator t=2024-08-03T17:01:52.460568959Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2024-08-03T17:01:52.461706038Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.141949ms grafana | logger=migrator t=2024-08-03T17:01:52.465827033Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2024-08-03T17:01:52.471593621Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.766188ms grafana | logger=migrator t=2024-08-03T17:01:52.476954646Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2024-08-03T17:01:52.477732332Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=777.446µs grafana | logger=migrator t=2024-08-03T17:01:52.480934729Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2024-08-03T17:01:52.481683195Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=748.366µs grafana | logger=migrator t=2024-08-03T17:01:52.48577863Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2024-08-03T17:01:52.486542906Z level=info 
msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=763.667µs grafana | logger=migrator t=2024-08-03T17:01:52.491251035Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2024-08-03T17:01:52.492056692Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=814.037µs grafana | logger=migrator t=2024-08-03T17:01:52.496149745Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2024-08-03T17:01:52.496734231Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=584.976µs grafana | logger=migrator t=2024-08-03T17:01:52.499792576Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2024-08-03T17:01:52.500573243Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=780.227µs grafana | logger=migrator t=2024-08-03T17:01:52.504465545Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2024-08-03T17:01:52.511135411Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.668156ms grafana | logger=migrator t=2024-08-03T17:01:52.514141496Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2024-08-03T17:01:52.519806923Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.664937ms grafana | logger=migrator t=2024-08-03T17:01:52.523622735Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2024-08-03T17:01:52.529219821Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.598206ms grafana | logger=migrator t=2024-08-03T17:01:52.533119465Z level=info msg="Executing migration" id="permission identifier migration" 
grafana | logger=migrator t=2024-08-03T17:01:52.539123515Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.00298ms grafana | logger=migrator t=2024-08-03T17:01:52.570236084Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2024-08-03T17:01:52.571829867Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.599463ms grafana | logger=migrator t=2024-08-03T17:01:52.575138915Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2024-08-03T17:01:52.575896301Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=757.146µs grafana | logger=migrator t=2024-08-03T17:01:52.580270817Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2024-08-03T17:01:52.580981274Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=710.967µs grafana | logger=migrator t=2024-08-03T17:01:52.585791563Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2024-08-03T17:01:52.58647107Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=679.237µs grafana | logger=migrator t=2024-08-03T17:01:52.590012839Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2024-08-03T17:01:52.590783475Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=770.446µs grafana | logger=migrator t=2024-08-03T17:01:52.594378986Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2024-08-03T17:01:52.594608598Z level=info msg="Migration 
successfully executed" id="alter table query_history alter column created_by type to bigint" duration=237.102µs grafana | logger=migrator t=2024-08-03T17:01:52.597915615Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2024-08-03T17:01:52.598065176Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=149.881µs grafana | logger=migrator t=2024-08-03T17:01:52.603188309Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2024-08-03T17:01:52.603789764Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=601.245µs grafana | logger=migrator t=2024-08-03T17:01:52.607453544Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2024-08-03T17:01:52.60816861Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=715.816µs grafana | logger=migrator t=2024-08-03T17:01:52.613829267Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2024-08-03T17:01:52.615015158Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.185831ms grafana | logger=migrator t=2024-08-03T17:01:52.619750228Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2024-08-03T17:01:52.620140891Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=393.174µs grafana | logger=migrator t=2024-08-03T17:01:52.625216173Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2024-08-03T17:01:52.625722707Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=506.424µs grafana | logger=migrator t=2024-08-03T17:01:52.629572239Z level=info msg="Executing migration" id="create query_history_star table v1" 
grafana | logger=migrator t=2024-08-03T17:01:52.630368605Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=796.146µs grafana | logger=migrator t=2024-08-03T17:01:52.634817203Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2024-08-03T17:01:52.635784141Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=966.888µs grafana | logger=migrator t=2024-08-03T17:01:52.642665929Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2024-08-03T17:01:52.648622048Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.955989ms grafana | logger=migrator t=2024-08-03T17:01:52.653354878Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2024-08-03T17:01:52.653399138Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=44.93µs grafana | logger=migrator t=2024-08-03T17:01:52.656636635Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-08-03T17:01:52.657395921Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=758.776µs grafana | logger=migrator t=2024-08-03T17:01:52.661653657Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-08-03T17:01:52.662528164Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=890.467µs grafana | logger=migrator t=2024-08-03T17:01:52.665732491Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-08-03T17:01:52.666579908Z level=info msg="Migration successfully executed" id="add index 
correlations.source_uid" duration=847.417µs
grafana | logger=migrator t=2024-08-03T17:01:52.669793064Z level=info msg="Executing migration" id="add correlation config column"
grafana | logger=migrator t=2024-08-03T17:01:52.676029637Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.203913ms
grafana | logger=migrator t=2024-08-03T17:01:52.680429373Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.681400752Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=973.899µs
grafana | logger=migrator t=2024-08-03T17:01:52.685961499Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.686782207Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=820.618µs
grafana | logger=migrator t=2024-08-03T17:01:52.689568809Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.706759413Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.181364ms
grafana | logger=migrator t=2024-08-03T17:01:52.70989346Z level=info msg="Executing migration" id="create correlation v2"
grafana | logger=migrator t=2024-08-03T17:01:52.710789057Z level=info msg="Migration successfully executed" id="create correlation v2" duration=895.758µs
grafana | logger=migrator t=2024-08-03T17:01:52.716574545Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.717342501Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=768.266µs
grafana | logger=migrator t=2024-08-03T17:01:52.720199655Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.720951642Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=750.897µs
grafana | logger=migrator t=2024-08-03T17:01:52.726053445Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.726825651Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=772.566µs
grafana | logger=migrator t=2024-08-03T17:01:52.729600264Z level=info msg="Executing migration" id="copy correlation v1 to v2"
grafana | logger=migrator t=2024-08-03T17:01:52.729774585Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=174.361µs
grafana | logger=migrator t=2024-08-03T17:01:52.732608789Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
grafana | logger=migrator t=2024-08-03T17:01:52.733206164Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=597.745µs
grafana | logger=migrator t=2024-08-03T17:01:52.737867133Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2024-08-03T17:01:52.743726322Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.857979ms
grafana | logger=migrator t=2024-08-03T17:01:52.748766574Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2024-08-03T17:01:52.749449519Z level=info msg="Migration successfully executed" id="create entity_events table" duration=683.695µs
grafana | logger=migrator t=2024-08-03T17:01:52.752685246Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2024-08-03T17:01:52.753465213Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=780.306µs
grafana | logger=migrator t=2024-08-03T17:01:52.759451062Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.759784046Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.764312113Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.765236382Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.770092471Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2024-08-03T17:01:52.771446323Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.355552ms
grafana | logger=migrator t=2024-08-03T17:01:52.777952337Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2024-08-03T17:01:52.779055086Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.102749ms
grafana | logger=migrator t=2024-08-03T17:01:52.783001309Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.784101228Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.099609ms
grafana | logger=migrator t=2024-08-03T17:01:52.791909064Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-03T17:01:52.793909531Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.005128ms
grafana | logger=migrator t=2024-08-03T17:01:52.800616226Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.801723296Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.10672ms
grafana | logger=migrator t=2024-08-03T17:01:52.808170879Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.810149226Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.983497ms
grafana | logger=migrator t=2024-08-03T17:01:52.817286035Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2024-08-03T17:01:52.819324163Z level=info msg="Migration successfully executed" id="Drop public config table" duration=2.037237ms
grafana | logger=migrator t=2024-08-03T17:01:52.823786349Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2024-08-03T17:01:52.82496185Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.175151ms
grafana | logger=migrator t=2024-08-03T17:01:52.829839491Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.83099417Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.15061ms
grafana | logger=migrator t=2024-08-03T17:01:52.834354108Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.835512148Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.15777ms
grafana | logger=migrator t=2024-08-03T17:01:52.839674792Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.840797431Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.122499ms
grafana | logger=migrator t=2024-08-03T17:01:52.845284869Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2024-08-03T17:01:52.867241142Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.953903ms
grafana | logger=migrator t=2024-08-03T17:01:52.898357992Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2024-08-03T17:01:52.907095055Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.739733ms
grafana | logger=migrator t=2024-08-03T17:01:52.913516638Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2024-08-03T17:01:52.921878758Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.36098ms
grafana | logger=migrator t=2024-08-03T17:01:52.926554917Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2024-08-03T17:01:52.927150022Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=595.675µs
grafana | logger=migrator t=2024-08-03T17:01:52.932859419Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2024-08-03T17:01:52.942251918Z level=info msg="Migration successfully executed" id="add share column" duration=9.390059ms
grafana | logger=migrator t=2024-08-03T17:01:52.945676387Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2024-08-03T17:01:52.945820358Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=137.001µs
grafana | logger=migrator t=2024-08-03T17:01:52.950319336Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2024-08-03T17:01:52.951095302Z level=info msg="Migration successfully executed" id="create file table" duration=774.966µs
grafana | logger=migrator t=2024-08-03T17:01:52.955514498Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2024-08-03T17:01:52.960030486Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=4.515088ms
grafana | logger=migrator t=2024-08-03T17:01:52.970139001Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2024-08-03T17:01:52.972211688Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.072777ms
grafana | logger=migrator t=2024-08-03T17:01:52.98196585Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2024-08-03T17:01:52.983429071Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.461231ms
grafana | logger=migrator t=2024-08-03T17:01:52.987931779Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2024-08-03T17:01:52.989705064Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.772965ms
grafana | logger=migrator t=2024-08-03T17:01:52.994459113Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2024-08-03T17:01:52.994521434Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=63.301µs
grafana | logger=migrator t=2024-08-03T17:01:52.997393288Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2024-08-03T17:01:52.997456948Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=63.47µs
grafana | logger=migrator t=2024-08-03T17:01:53.000457173Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2024-08-03T17:01:53.001264381Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=811.998µs
grafana | logger=migrator t=2024-08-03T17:01:53.005807318Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2024-08-03T17:01:53.006150022Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=343.704µs
grafana | logger=migrator t=2024-08-03T17:01:53.011538616Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2024-08-03T17:01:53.012911318Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.377602ms
grafana | logger=migrator t=2024-08-03T17:01:53.017708839Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2024-08-03T17:01:53.024426565Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.717626ms
grafana | logger=migrator t=2024-08-03T17:01:53.029994912Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2024-08-03T17:01:53.030122903Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=128.851µs
grafana | logger=migrator t=2024-08-03T17:01:53.036404885Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2024-08-03T17:01:53.037204942Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=800.117µs
grafana | logger=migrator t=2024-08-03T17:01:53.041133405Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2024-08-03T17:01:53.041553818Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=421.473µs
grafana | logger=migrator t=2024-08-03T17:01:53.045581183Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2024-08-03T17:01:53.045801475Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=220.762µs
grafana | logger=migrator t=2024-08-03T17:01:53.049720877Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2024-08-03T17:01:53.05006458Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=343.893µs
grafana | logger=migrator t=2024-08-03T17:01:53.054177015Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2024-08-03T17:01:53.06075617Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.578875ms
grafana | logger=migrator t=2024-08-03T17:01:53.067511406Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2024-08-03T17:01:53.075471993Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.960587ms
grafana | logger=migrator t=2024-08-03T17:01:53.078404418Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2024-08-03T17:01:53.07980344Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.395882ms
grafana | logger=migrator t=2024-08-03T17:01:53.08342793Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
grafana | logger=migrator t=2024-08-03T17:01:53.156392462Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=72.966172ms
grafana | logger=migrator t=2024-08-03T17:01:53.161261844Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
grafana | logger=migrator t=2024-08-03T17:01:53.16205956Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=796.826µs
grafana | logger=migrator t=2024-08-03T17:01:53.168273062Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2024-08-03T17:01:53.169821905Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.546213ms
grafana | logger=migrator t=2024-08-03T17:01:53.176265619Z level=info msg="Executing migration" id="add primary key to seed_assigment"
grafana | logger=migrator t=2024-08-03T17:01:53.201290889Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.0208ms
grafana | logger=migrator t=2024-08-03T17:01:53.217765988Z level=info msg="Executing migration" id="add origin column to seed_assignment"
grafana | logger=migrator t=2024-08-03T17:01:53.22508237Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.319431ms
grafana | logger=migrator t=2024-08-03T17:01:53.228967292Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
grafana | logger=migrator t=2024-08-03T17:01:53.229344915Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=377.883µs
grafana | logger=migrator t=2024-08-03T17:01:53.232792904Z level=info msg="Executing migration" id="prevent seeding OnCall access"
grafana | logger=migrator t=2024-08-03T17:01:53.233027465Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=235.421µs
grafana | logger=migrator t=2024-08-03T17:01:53.235633867Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
grafana | logger=migrator t=2024-08-03T17:01:53.235899341Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=266.074µs
grafana | logger=migrator t=2024-08-03T17:01:53.241000363Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
grafana | logger=migrator t=2024-08-03T17:01:53.241188094Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=185.351µs
grafana | logger=migrator t=2024-08-03T17:01:53.243384473Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
grafana | logger=migrator t=2024-08-03T17:01:53.243590945Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=206.272µs
grafana | logger=migrator t=2024-08-03T17:01:53.247452027Z level=info msg="Executing migration" id="create folder table"
grafana | logger=migrator t=2024-08-03T17:01:53.248479126Z level=info msg="Migration successfully executed" id="create folder table" duration=1.026919ms
grafana | logger=migrator t=2024-08-03T17:01:53.253060474Z level=info msg="Executing migration" id="Add index for parent_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.254142493Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.082189ms
grafana | logger=migrator t=2024-08-03T17:01:53.256999287Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
grafana | logger=migrator t=2024-08-03T17:01:53.257970155Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=970.138µs
grafana | logger=migrator t=2024-08-03T17:01:53.260773098Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2024-08-03T17:01:53.260803779Z level=info msg="Migration successfully executed" id="Update folder title length" duration=31.061µs
grafana | logger=migrator t=2024-08-03T17:01:53.262894177Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.264019867Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.11601ms
grafana | logger=migrator t=2024-08-03T17:01:53.268459013Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.269447081Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=987.078µs
grafana | logger=migrator t=2024-08-03T17:01:53.273221544Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
grafana | logger=migrator t=2024-08-03T17:01:53.274250072Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.028748ms
grafana | logger=migrator t=2024-08-03T17:01:53.279474906Z level=info msg="Executing migration" id="Sync dashboard and folder table"
grafana | logger=migrator t=2024-08-03T17:01:53.280122671Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=652.055µs
grafana | logger=migrator t=2024-08-03T17:01:53.283146956Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
grafana | logger=migrator t=2024-08-03T17:01:53.283624381Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=478.695µs
grafana | logger=migrator t=2024-08-03T17:01:53.287964087Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
grafana | logger=migrator t=2024-08-03T17:01:53.288957855Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=991.908µs
grafana | logger=migrator t=2024-08-03T17:01:53.292928269Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.293797906Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=868.767µs
grafana | logger=migrator t=2024-08-03T17:01:53.296896602Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
grafana | logger=migrator t=2024-08-03T17:01:53.297919341Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.025289ms
grafana | logger=migrator t=2024-08-03T17:01:53.301856274Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2024-08-03T17:01:53.303083485Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.225921ms
grafana | logger=migrator t=2024-08-03T17:01:53.306030679Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
grafana | logger=migrator t=2024-08-03T17:01:53.307167538Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.136369ms
grafana | logger=migrator t=2024-08-03T17:01:53.310147143Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2024-08-03T17:01:53.311101411Z level=info msg="Migration successfully executed" id="create anon_device table" duration=954.428µs
grafana | logger=migrator t=2024-08-03T17:01:53.315455588Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2024-08-03T17:01:53.321428038Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=5.95824ms
grafana | logger=migrator t=2024-08-03T17:01:53.333317307Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2024-08-03T17:01:53.334521438Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.209051ms
grafana | logger=migrator t=2024-08-03T17:01:53.33833807Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2024-08-03T17:01:53.339038996Z level=info msg="Migration successfully executed" id="create signing_key table" duration=701.116µs
grafana | logger=migrator t=2024-08-03T17:01:53.341326415Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-08-03T17:01:53.342125321Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=798.816µs
grafana | logger=migrator t=2024-08-03T17:01:53.344454921Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-08-03T17:01:53.345312259Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=857.278µs
grafana | logger=migrator t=2024-08-03T17:01:53.349730395Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-08-03T17:01:53.349950617Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=220.832µs
grafana | logger=migrator t=2024-08-03T17:01:53.35264615Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-08-03T17:01:53.359280836Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.633166ms
grafana | logger=migrator t=2024-08-03T17:01:53.362375982Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-08-03T17:01:53.362960737Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=585.655µs
grafana | logger=migrator t=2024-08-03T17:01:53.36577547Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-08-03T17:01:53.365791401Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=18.201µs
grafana | logger=migrator t=2024-08-03T17:01:53.36929633Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-08-03T17:01:53.370148707Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=852.207µs
grafana | logger=migrator t=2024-08-03T17:01:53.37293298Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-08-03T17:01:53.37294808Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=15.72µs
grafana | logger=migrator t=2024-08-03T17:01:53.375161959Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2024-08-03T17:01:53.376111878Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=949.269µs
grafana | logger=migrator t=2024-08-03T17:01:53.380090591Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-08-03T17:01:53.380976198Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=885.777µs
grafana | logger=migrator t=2024-08-03T17:01:53.383968373Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-08-03T17:01:53.385743368Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.771975ms
grafana | logger=migrator t=2024-08-03T17:01:53.389244457Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-08-03T17:01:53.390317477Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.0746ms
grafana | logger=migrator t=2024-08-03T17:01:53.394000897Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-08-03T17:01:53.394504471Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=505.034µs
grafana | logger=migrator t=2024-08-03T17:01:53.399733535Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
grafana | logger=migrator t=2024-08-03T17:01:53.400592442Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=861.197µs
grafana | logger=migrator t=2024-08-03T17:01:53.403617158Z level=info msg="Executing migration" id="create cloud_migration table v1"
grafana | logger=migrator t=2024-08-03T17:01:53.404855219Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.238631ms
grafana | logger=migrator t=2024-08-03T17:01:53.407912014Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
grafana | logger=migrator t=2024-08-03T17:01:53.408861172Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=948.598µs
grafana | logger=migrator t=2024-08-03T17:01:53.411938778Z level=info msg="Executing migration" id="add stack_id column"
grafana | logger=migrator t=2024-08-03T17:01:53.424168321Z level=info msg="Migration successfully executed" id="add stack_id column" duration=12.219563ms
grafana | logger=migrator t=2024-08-03T17:01:53.427541489Z level=info msg="Executing migration" id="add region_slug column"
grafana | logger=migrator t=2024-08-03T17:01:53.435706218Z level=info msg="Migration successfully executed" id="add region_slug column" duration=8.164939ms
grafana | logger=migrator t=2024-08-03T17:01:53.438647292Z level=info msg="Executing migration" id="add cluster_slug column"
grafana | logger=migrator t=2024-08-03T17:01:53.448762457Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=10.112125ms
grafana | logger=migrator t=2024-08-03T17:01:53.454312593Z level=info msg="Executing migration" id="add migration uid column"
grafana | logger=migrator t=2024-08-03T17:01:53.468541784Z level=info msg="Migration successfully executed" id="add migration uid column" duration=14.226191ms
grafana | logger=migrator t=2024-08-03T17:01:53.480521364Z level=info msg="Executing migration" id="Update uid column values for migration"
grafana | logger=migrator t=2024-08-03T17:01:53.480846016Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=329.772µs
grafana | logger=migrator t=2024-08-03T17:01:53.483990603Z level=info msg="Executing migration" id="Add unique index migration_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.485716558Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.727625ms
grafana | logger=migrator t=2024-08-03T17:01:53.492098191Z level=info msg="Executing migration" id="add migration run uid column"
grafana | logger=migrator t=2024-08-03T17:01:53.502187636Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.080245ms
grafana | logger=migrator t=2024-08-03T17:01:53.529808417Z level=info msg="Executing migration" id="Update uid column values for migration run"
grafana | logger=migrator t=2024-08-03T17:01:53.530222671Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=419.944µs
grafana | logger=migrator t=2024-08-03T17:01:53.535961109Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
grafana | logger=migrator t=2024-08-03T17:01:53.537791514Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.835035ms
grafana | logger=migrator t=2024-08-03T17:01:53.542685546Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
grafana | logger=migrator t=2024-08-03T17:01:53.542926888Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=243.082µs
grafana | logger=migrator t=2024-08-03T17:01:53.546484697Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
grafana | logger=migrator t=2024-08-03T17:01:53.556542672Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.044235ms
grafana | logger=migrator t=2024-08-03T17:01:53.562643813Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
grafana | logger=migrator t=2024-08-03T17:01:53.573787356Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.142113ms
grafana | logger=migrator t=2024-08-03T17:01:53.577809341Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
grafana | logger=migrator t=2024-08-03T17:01:53.578359405Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=550.974µs
grafana | logger=migrator t=2024-08-03T17:01:53.582112636Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
grafana | logger=migrator t=2024-08-03T17:01:53.58255195Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=439.644µs
grafana | logger=migrator t=2024-08-03T17:01:53.58733268Z level=info msg="Executing migration" id="add record column to alert_rule table"
grafana | logger=migrator t=2024-08-03T17:01:53.59795699Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.62017ms
grafana | logger=migrator t=2024-08-03T17:01:53.603791369Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
grafana | logger=migrator t=2024-08-03T17:01:53.611096279Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.305ms
grafana | logger=migrator t=2024-08-03T17:01:53.614374258Z level=info msg="migrations completed" performed=572 skipped=0 duration=4.135227193s
grafana | logger=migrator t=2024-08-03T17:01:53.614878882Z level=info msg="Unlocking database"
grafana | logger=sqlstore t=2024-08-03T17:01:53.627130564Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-08-03T17:01:53.628119763Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-08-03T17:01:53.633793711Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-08-03T17:01:53.687764134Z level=info msg="Restored cache from database" duration=527.135µs
grafana | logger=plugin.store t=2024-08-03T17:01:53.691181272Z level=info msg="Loading plugins..."
grafana | logger=plugins.registration t=2024-08-03T17:01:53.72310222Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered"
grafana | logger=plugins.initialization t=2024-08-03T17:01:53.72314421Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered"
grafana | logger=local.finder t=2024-08-03T17:01:53.723213951Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-08-03T17:01:53.723227901Z level=info msg="Plugins loaded" count=54 duration=32.048779ms
grafana | logger=query_data t=2024-08-03T17:01:53.729747366Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2024-08-03T17:01:53.734289724Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-08-03T17:01:53.74096011Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
grafana | logger=ngalert.state.manager t=2024-08-03T17:01:53.747990969Z level=info msg="Running in alternative execution of Error/NoData mode"
grafana | logger=infra.usagestats.collector t=2024-08-03T17:01:53.751312137Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-08-03T17:01:53.753794737Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-08-03T17:01:53.786399662Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-08-03T17:01:53.786431532Z level=info msg="finished to provision alerting"
grafana | logger=ngalert.state.manager t=2024-08-03T17:01:53.787306049Z level=info msg="Warming state cache for startup"
grafana | logger=ngalert.state.manager t=2024-08-03T17:01:53.787756513Z level=info msg="State cache has been initialized" states=0 duration=450.424µs
grafana | logger=ngalert.multiorg.alertmanager t=2024-08-03T17:01:53.787783174Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=ngalert.scheduler t=2024-08-03T17:01:53.787800914Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
grafana | logger=ticker t=2024-08-03T17:01:53.787895035Z level=info msg=starting first_tick=2024-08-03T17:02:00Z
grafana | logger=grafanaStorageLogger t=2024-08-03T17:01:53.789416167Z level=info msg="Storage starting"
grafana | logger=http.server t=2024-08-03T17:01:53.790634507Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=provisioning.dashboard t=2024-08-03T17:01:53.868817533Z level=info msg="starting to provision dashboards"
grafana | logger=plugins.update.checker t=2024-08-03T17:01:53.879057509Z level=info msg="Update check succeeded" duration=89.451681ms
grafana | logger=sqlstore.transactions t=2024-08-03T17:01:53.89588241Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=grafana.update.checker t=2024-08-03T17:01:53.904641984Z level=info msg="Update check succeeded" duration=116.710519ms
grafana | logger=sqlstore.transactions t=2024-08-03T17:01:53.909612946Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=sqlstore.transactions t=2024-08-03T17:01:53.930294729Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=sqlstore.transactions t=2024-08-03T17:01:53.941237131Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database
is locked" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-08-03T17:01:53.955525261Z level=info msg="Patterns update finished" duration=150.887286ms grafana | logger=grafana-apiserver t=2024-08-03T17:01:54.114581373Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2024-08-03T17:01:54.115122907Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=provisioning.dashboard t=2024-08-03T17:01:54.155356129Z level=info msg="finished to provision dashboards" grafana | logger=infra.usagestats t=2024-08-03T17:02:44.800387775Z level=info msg="Usage stats are ready to report" =================================== ======== Logs from kafka ======== kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-08-03 17:01:51,220] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-03 17:01:51,221] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.0-ccs.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-3.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/utility-belt-7.7.0-130.jar:/usr/share/java/cp-base-new/common-utils-7.7.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-common-7.7.0-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.0-ccs.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.0-ccs.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO
Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,221] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,224] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,226] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-08-03 17:01:51,230] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-08-03 17:01:51,236] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:51,250] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:51,250] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:51,258] INFO Socket connection established, initiating session, client: /172.17.0.6:34324, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:51,286] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000254ef0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:51,409] INFO Session: 0x100000254ef0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:51,409] INFO EventThread shut down for session: 0x100000254ef0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2024-08-03 17:01:51,924] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-08-03 17:01:52,135] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-08-03 17:01:52,211] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-08-03 17:01:52,212] INFO starting (kafka.server.KafkaServer)
kafka | [2024-08-03 17:01:52,213] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-08-03 17:01:52,225] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,228] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,229] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,230] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-08-03 17:01:52,234] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-08-03 17:01:52,238] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:52,245] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-08-03 17:01:52,245] INFO Opening socket connection to server zookeeper/172.17.0.3:2181.
(org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:52,248] INFO Socket connection established, initiating session, client: /172.17.0.6:34326, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:52,263] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000254ef0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-08-03 17:01:52,272] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-08-03 17:01:52,618] INFO Cluster ID = Iw6ZavHnRjau9r3O-pEHnQ (kafka.server.KafkaServer)
kafka | [2024-08-03 17:01:52,676] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | eligible.leader.replicas.enable = false
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
kafka | group.consumer.heartbeat.interval.ms = 5000
kafka | group.consumer.max.heartbeat.interval.ms = 15000
kafka | group.consumer.max.session.timeout.ms = 60000
kafka | group.consumer.max.size = 2147483647
kafka | group.consumer.min.heartbeat.interval.ms = 5000
kafka | group.consumer.min.session.timeout.ms = 45000
kafka | group.consumer.session.timeout.ms = 45000
kafka | group.coordinator.new.enable = false
kafka | group.coordinator.rebalance.protocols = [classic]
kafka | group.coordinator.threads = 1
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.7-IV4
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.local.retention.bytes = -2
kafka | log.local.retention.ms = -2
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | server.max.startup.time.ms = 9223372036854775807
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.allow.dn.changes = false
kafka | ssl.allow.san.changes = false
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | telemetry.max.bytes = 1048576
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.partition.verification.enable = true
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | unstable.api.versions.enable = false
kafka | unstable.metadata.versions.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.metadata.migration.min.batch.size = 200
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2024-08-03 17:01:52,711] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-08-03 17:01:52,711] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-08-03 17:01:52,712] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-08-03 17:01:52,715] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-08-03 17:01:52,720] INFO [KafkaServer id=1] Rewriting
/var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) kafka | [2024-08-03 17:01:52,785] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-08-03 17:01:52,792] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-08-03 17:01:52,800] INFO Loaded 0 logs in 14ms (kafka.log.LogManager) kafka | [2024-08-03 17:01:52,802] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-08-03 17:01:52,803] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) kafka | [2024-08-03 17:01:52,817] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-08-03 17:01:52,861] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-08-03 17:01:52,874] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-08-03 17:01:52,886] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-08-03 17:01:52,912] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-08-03 17:01:53,183] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-08-03 17:01:53,209] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-08-03 17:01:53,210] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-08-03 17:01:53,216] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-08-03 
17:01:53,223] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-08-03 17:01:53,244] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,246] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,247] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,249] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,253] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,268] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-08-03 17:01:53,270] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-08-03 17:01:53,302] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-08-03 17:01:53,334] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1722704513316,1722704513316,1,0,0,72057604052811777,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-08-03 17:01:53,335] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-08-03 17:01:53,369] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-08-03 17:01:53,375] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,385] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,385] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,396] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-08-03 17:01:53,404] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:01:53,405] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,410] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,410] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:01:53,415] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-08-03 17:01:53,449] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) kafka | [2024-08-03 17:01:53,449] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,450] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-08-03 17:01:53,455] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-08-03 17:01:53,455] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-08-03 17:01:53,457] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,463] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,466] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,510] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-08-03 17:01:53,513] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,535] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,547] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-08-03 17:01:53,556] INFO [RequestSendThread controllerId=1] Starting 
(kafka.controller.RequestSendThread) kafka | [2024-08-03 17:01:53,557] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,557] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,558] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,558] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,561] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,562] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,562] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,562] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-08-03 17:01:53,567] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-08-03 17:01:53,568] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,572] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-08-03 17:01:53,577] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-08-03 17:01:53,578] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-08-03 17:01:53,580] INFO [ReplicaStateMachine controllerId=1] 
Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-08-03 17:01:53,580] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-08-03 17:01:53,581] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-08-03 17:01:53,581] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-08-03 17:01:53,583] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-08-03 17:01:53,583] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,586] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-08-03 17:01:53,591] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-08-03 17:01:53,598] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,598] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,598] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) kafka | [2024-08-03 17:01:53,599] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,599] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,604] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,605] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2024-08-03 17:01:53,629] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) kafka | [2024-08-03 17:01:53,634] INFO Kafka version: 7.7.0-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-08-03 17:01:53,634] INFO Kafka commitId: 342a7370342e6bbcecbdf171dbe71cf87ce67c49 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-08-03 17:01:53,634] INFO Kafka startTimeMs: 1722704513629 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-08-03 17:01:53,636] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-08-03 17:01:53,638] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71) kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:135) kafka | [2024-08-03 17:01:53,643] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) kafka | [2024-08-03 17:01:53,647] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:53,749] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-08-03 17:01:53,836] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2024-08-03 17:01:53,837] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2024-08-03 17:01:53,867] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-08-03 17:01:58,649] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-08-03 17:01:58,651] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-08-03 17:02:28,582] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 
(kafka.controller.KafkaController) kafka | [2024-08-03 17:02:28,585] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-08-03 17:02:28,586] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-08-03 17:02:28,596] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-08-03 17:02:28,642] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment 
[Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(IAVnKXbsRTO6_G-as64MVg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(ELVdtwNmTj21v7GFX-sLyg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-08-03 17:02:28,643] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | 
[2024-08-03 17:02:28,648] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,653] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,653] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,653] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,654] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,655] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,656] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,657] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,657] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,657] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,657] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,658] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,658] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,658] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-08-03 17:02:28,658-665] INFO [Controller id=1 epoch=1] Changed the remaining __consumer_offsets partitions and policy-pdp-pap-0 from NonExistentPartition to NewPartition with assigned replicas 1 (one entry per partition, identical apart from the partition name) (state.change.logger)
kafka | [2024-08-03 17:02:28,666] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-03 17:02:28,674-684] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partitions __consumer_offsets-0 through __consumer_offsets-49 and policy-pdp-pap-0 from NonExistentReplica to NewReplica (one entry per partition) (state.change.logger)
kafka | [2024-08-03 17:02:28,684] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-03 17:02:28,821-832] INFO [Controller id=1 epoch=1] Changed partitions __consumer_offsets-0 through __consumer_offsets-49 and policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (one entry per partition) (state.change.logger)
kafka | [2024-08-03 17:02:28,837-839] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partitions __consumer_offsets-13, 46, 9, 42, 21, 17, 30, 26, 5, 38, 1, 34, 16, 45, 12 and 41 (one entry per partition) (state.change.logger)
kafka | [2024-08-03 17:02:28,839] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1,
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-08-03 17:02:28,840] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-08-03 17:02:28,841] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-08-03 17:02:28,841] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-08-03 17:02:28,841] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-08-03 
17:02:28,841] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-08-03 17:02:28,841] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-08-03 17:02:28,842] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-08-03 17:02:28,843] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-08-03 17:02:28,844] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-08-03 17:02:28,844] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-08-03 17:02:28,844] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-08-03 17:02:28,844] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-08-03 17:02:28,844] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 
kafka | [2024-08-03 17:02:28,845] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-08-03 17:02:28,846] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-08-03 17:02:28,851] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2024-08-03 17:02:28,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica 
to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,854] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,855] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-08-03 17:02:28,856] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-08-03 17:02:28,863] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,864] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,865] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-08-03 17:02:28,897] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-08-03 17:02:28,898] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-08-03 17:02:28,899] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, 
__consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-08-03 17:02:28,899] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2024-08-03 17:02:28,945] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:28,955] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:28,956] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:28,957] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 
17:02:28,959] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-08-03 17:02:28,979] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:28,980] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:28,980] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:28,980] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:28,980] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:28,989] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:28,990] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:28,990] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:28,990] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:28,990] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:28,999] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,001] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,001] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,001] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,001] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,009] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,010] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,010] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,010] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,012] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,020] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,020] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,020] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,020] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,020] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,029] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,030] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,030] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,030] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,030] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,040] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,041] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,041] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,041] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,041] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,051] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,052] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,052] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,052] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,052] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,059] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,059] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,059] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,060] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,060] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,066] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,067] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,067] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,067] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,067] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,075] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,075] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,076] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,076] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,076] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,084] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,084] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,085] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,085] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,085] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,093] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,094] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,094] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,094] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,094] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,114] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,116] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,116] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,116] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,116] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,128] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,128] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,129] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,129] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,129] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,142] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,143] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,143] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,143] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,143] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,152] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,154] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,154] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,154] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,154] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,280] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,281] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,281] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,281] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,281] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,289] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,289] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,289] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,289] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,289] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,308] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,309] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,309] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,309] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,309] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,321] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,322] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,322] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,322] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,322] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,345] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,347] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,347] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,347] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,347] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,356] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,358] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,358] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,359] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,359] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,370] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,371] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,371] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,372] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,372] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,385] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,386] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,386] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,386] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,386] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,397] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,398] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,398] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,398] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,398] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,412] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,415] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,415] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,415] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,416] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,434] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,435] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,436] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,436] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,436] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,443] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,444] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,445] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,445] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,445] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,455] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,456] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,456] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,456] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,456] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,468] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,469] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,469] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,469] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,469] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-03 17:02:29,483] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-03 17:02:29,484] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-03 17:02:29,484] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,484] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-03 17:02:29,485] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-08-03 17:02:29,491] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,493] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,493] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,493] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,493] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,500] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,501] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,501] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,501] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,501] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(IAVnKXbsRTO6_G-as64MVg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,509] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,511] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,511] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,511] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,512] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,517] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,518] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,518] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,518] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,518] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,524] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,525] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,525] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,525] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,525] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,532] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,537] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,537] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,537] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,537] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,546] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,546] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,546] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,546] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,547] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,555] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,556] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,556] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,556] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,556] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,560] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,561] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,561] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,561] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,561] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,570] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,570] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,571] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,571] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,571] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,617] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,618] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,618] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,618] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,618] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,626] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,626] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,626] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,626] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,627] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,634] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,645] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,646] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,646] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,646] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,663] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,664] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,664] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,664] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,665] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,672] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,674] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,674] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,674] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,674] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,682] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,682] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,682] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,682] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,683] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,693] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,694] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,694] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,694] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,695] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,709] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-03 17:02:29,710] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-03 17:02:29,710] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,710] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-03 17:02:29,711] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(ELVdtwNmTj21v7GFX-sLyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-08-03 17:02:29,716] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-08-03 17:02:29,717] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-08-03 17:02:29,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-08-03 17:02:29,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-08-03 17:02:29,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,727] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading 
of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 
17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,734] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,734] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,734] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,734] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,734] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,734] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,734] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,734] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,734] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,735] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,735] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,735] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,735] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,735] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,735] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,735] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,735] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,735] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,735] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,736] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,736] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,736] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,736] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 
17:02:29,736] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,737] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,737] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,737] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,737] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,737] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 
milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,738] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,738] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,739] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,739] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,739] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,739] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,743] INFO [Broker id=1] Finished LeaderAndIsr request in 885ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-08-03 17:02:29,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,748] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-03 17:02:29,759] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ELVdtwNmTj21v7GFX-sLyg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=IAVnKXbsRTO6_G-as64MVg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-08-03 17:02:29,771] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,771] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,772] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,773] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,774] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,785] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-08-03 17:02:29,789] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-08-03 17:02:29,925] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4a63ef6b-3774-49bc-b415-e99c86982494 in Empty state. Created a new member id consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,925] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. 
Created a new member id consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,941] INFO [GroupCoordinator 1]: Preparing to rebalance group 4a63ef6b-3774-49bc-b415-e99c86982494 in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:29,942] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:30,012] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 12edca8d-4cb0-4a60-9247-302b0dfb86e1 in Empty state. Created a new member id consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:30,017] INFO [GroupCoordinator 1]: Preparing to rebalance group 12edca8d-4cb0-4a60-9247-302b0dfb86e1 in state PreparingRebalance with old generation 0 (__consumer_offsets-13) (reason: Adding new member consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:32,952] INFO [GroupCoordinator 1]: Stabilized group 4a63ef6b-3774-49bc-b415-e99c86982494 generation 1 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:32,956] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:32,989] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:32,989] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80 for group 4a63ef6b-3774-49bc-b415-e99c86982494 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:33,018] INFO [GroupCoordinator 1]: Stabilized group 12edca8d-4cb0-4a60-9247-302b0dfb86e1 generation 1 (__consumer_offsets-13) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-03 17:02:33,040] INFO [GroupCoordinator 1]: Assignment received from leader consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294 for group 12edca8d-4cb0-4a60-9247-302b0dfb86e1 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-08-03 17:01:50+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-08-03 17:01:50+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-08-03 17:01:50+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-08-03 17:01:50+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-08-03 17:01:50 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-08-03 17:01:50 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-08-03 17:01:51 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-08-03 17:01:52+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-08-03 17:01:52+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-08-03 17:01:52+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-08-03 17:01:52 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-08-03 17:01:52 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-08-03 17:01:52 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-08-03 17:01:52 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-08-03 17:01:52 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-08-03 17:01:52 0 [Note] Plugin 'FEEDBACK' is disabled. 
mariadb | 2024-08-03 17:01:52 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-08-03 17:01:52 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-08-03 17:01:52 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-08-03 17:01:52 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-08-03 17:01:53+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-08-03 17:01:55+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-08-03 17:01:55+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-08-03 17:01:55+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-08-03 17:01:55+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb |
mariadb | 2024-08-03 17:01:56+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-08-03 17:01:56 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Starting shutdown...
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Buffer pool(s) dump completed at 240803 17:01:56
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Shutdown completed; log sequence number 330814; transaction id 298
mariadb | 2024-08-03 17:01:56 0 [Note] mariadbd: Shutdown complete
mariadb |
mariadb | 2024-08-03 17:01:56+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb |
mariadb | 2024-08-03 17:01:56+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
mariadb |
mariadb | 2024-08-03 17:01:56 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-08-03 17:01:56 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-08-03 17:01:56 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-08-03 17:01:56 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: log sequence number 330814; transaction id 299
mariadb | 2024-08-03 17:01:56 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb | 2024-08-03 17:01:56 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-08-03 17:01:56 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
mariadb | 2024-08-03 17:01:56 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb | 2024-08-03 17:01:56 0 [Note] Server socket created on IP: '::'.
mariadb | 2024-08-03 17:01:56 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mariadb | 2024-08-03 17:01:56 0 [Note] InnoDB: Buffer pool(s) load completed at 240803 17:01:56
mariadb | 2024-08-03 17:01:56 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-08-03 17:01:57 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
mariadb | 2024-08-03 17:01:57 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
mariadb | 2024-08-03 17:01:57 32 [Warning] Aborted connection 32 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
===================================
======== Logs from apex-pdp ========
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.2:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.6:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-08-03T17:02:28.952+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-08-03T17:02:29.174+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 12edca8d-4cb0-4a60-9247-302b0dfb86e1
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-08-03T17:02:29.356+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-08-03T17:02:29.356+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-08-03T17:02:29.356+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704549354
policy-apex-pdp | [2024-08-03T17:02:29.359+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-1, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-08-03T17:02:29.372+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-08-03T17:02:29.372+00:00|INFO|ServiceManager|main] service manager starting topics
policy-apex-pdp | [2024-08-03T17:02:29.374+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=12edca8d-4cb0-4a60-9247-302b0dfb86e1, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-apex-pdp | [2024-08-03T17:02:29.396+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 12edca8d-4cb0-4a60-9247-302b0dfb86e1
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-08-03T17:02:29.405+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-08-03T17:02:29.405+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-08-03T17:02:29.405+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704549405
policy-apex-pdp | [2024-08-03T17:02:29.406+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-08-03T17:02:29.407+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=04467ebc-e0d0-477c-bdd8-dfed61fc6127, alive=false, publisher=null]]: starting
policy-apex-pdp | [2024-08-03T17:02:29.421+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-apex-pdp | acks = -1
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | batch.size = 16384
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | buffer.memory = 33554432
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = producer-1
policy-apex-pdp | compression.type = none
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | delivery.timeout.ms = 120000
policy-apex-pdp | enable.idempotence = true
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp | linger.ms = 0
policy-apex-pdp | max.block.ms = 60000
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-apex-pdp | max.request.size = 1048576
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-apex-pdp | partitioner.class = null
policy-apex-pdp | partitioner.ignore.keys = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retries = 2147483647
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | transaction.timeout.ms = 60000
policy-apex-pdp | transactional.id = null
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp |
policy-apex-pdp | [2024-08-03T17:02:29.432+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-apex-pdp | [2024-08-03T17:02:29.456+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-08-03T17:02:29.456+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-08-03T17:02:29.456+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704549456
policy-apex-pdp | [2024-08-03T17:02:29.457+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=04467ebc-e0d0-477c-bdd8-dfed61fc6127, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-apex-pdp | [2024-08-03T17:02:29.457+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-apex-pdp | [2024-08-03T17:02:29.457+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-apex-pdp | [2024-08-03T17:02:29.460+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-apex-pdp | [2024-08-03T17:02:29.460+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-apex-pdp | [2024-08-03T17:02:29.462+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-apex-pdp | [2024-08-03T17:02:29.462+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-apex-pdp | [2024-08-03T17:02:29.462+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-apex-pdp | [2024-08-03T17:02:29.462+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=12edca8d-4cb0-4a60-9247-302b0dfb86e1, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5dd91bca
policy-apex-pdp | [2024-08-03T17:02:29.463+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=12edca8d-4cb0-4a60-9247-302b0dfb86e1, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-apex-pdp | [2024-08-03T17:02:29.463+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-apex-pdp | [2024-08-03T17:02:29.487+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-apex-pdp | []
policy-apex-pdp | [2024-08-03T17:02:29.489+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4cad3f81-f1ac-4d54-ab17-2c63094c8fef","timestampMs":1722704549466,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-08-03T17:02:29.694+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2024-08-03T17:02:29.694+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-08-03T17:02:29.694+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2024-08-03T17:02:29.694+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-70fab835==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@49d63416{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-47428937==org.glassfish.jersey.servlet.ServletContainer@abc54d74{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@3dd69f5a{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3aa3193a{/,null,STOPPED}, connector=RestServerParameters@1c98290c{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-70fab835==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@49d63416{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-47428937==org.glassfish.jersey.servlet.ServletContainer@abc54d74{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-08-03T17:02:29.705+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-08-03T17:02:29.705+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-08-03T17:02:29.706+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-apex-pdp | [2024-08-03T17:02:29.707+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-70fab835==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@49d63416{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-47428937==org.glassfish.jersey.servlet.ServletContainer@abc54d74{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@3dd69f5a{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3aa3193a{/,null,STOPPED}, connector=RestServerParameters@1c98290c{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-70fab835==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@49d63416{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-47428937==org.glassfish.jersey.servlet.ServletContainer@abc54d74{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-08-03T17:02:29.963+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ
policy-apex-pdp | [2024-08-03T17:02:29.963+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ
policy-apex-pdp | [2024-08-03T17:02:29.964+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-apex-pdp | [2024-08-03T17:02:29.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-apex-pdp | [2024-08-03T17:02:29.983+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] (Re-)joining group
policy-apex-pdp | [2024-08-03T17:02:30.015+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Request joining group due to: need to re-join with the given member-id: consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294
policy-apex-pdp | [2024-08-03T17:02:30.015+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-apex-pdp | [2024-08-03T17:02:30.016+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] (Re-)joining group
policy-apex-pdp | [2024-08-03T17:02:30.543+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-apex-pdp | [2024-08-03T17:02:30.545+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-apex-pdp | [2024-08-03T17:02:33.020+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Successfully joined group with generation Generation{generationId=1, memberId='consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294', protocol='range'}
policy-apex-pdp | [2024-08-03T17:02:33.031+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Finished assignment for group at generation 1: {consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294=Assignment(partitions=[policy-pdp-pap-0])}
policy-apex-pdp | [2024-08-03T17:02:33.043+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Successfully synced group in generation Generation{generationId=1, memberId='consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2-b436193c-6b49-414c-b3c1-1aafb716b294', protocol='range'}
policy-apex-pdp | [2024-08-03T17:02:33.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-apex-pdp | [2024-08-03T17:02:33.046+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Adding newly assigned partitions: policy-pdp-pap-0
policy-apex-pdp | [2024-08-03T17:02:33.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Found no committed offset for partition policy-pdp-pap-0
policy-apex-pdp | [2024-08-03T17:02:33.095+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-12edca8d-4cb0-4a60-9247-302b0dfb86e1-2, groupId=12edca8d-4cb0-4a60-9247-302b0dfb86e1] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-apex-pdp | [2024-08-03T17:02:49.462+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"71388f22-6298-4cc6-8404-2b768064ac3a","timestampMs":1722704569462,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-08-03T17:02:49.480+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"71388f22-6298-4cc6-8404-2b768064ac3a","timestampMs":1722704569462,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-08-03T17:02:49.482+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp |
[2024-08-03T17:02:49.661+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","timestampMs":1722704569592,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.676+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35b70c02-d60f-431e-9f25-11bb38c59017","timestampMs":1722704569675,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-08-03T17:02:49.676+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-08-03T17:02:49.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d93a64-a6d0-463f-b8f0-87613a72b7fd","timestampMs":1722704569683,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.749+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35b70c02-d60f-431e-9f25-11bb38c59017","timestampMs":1722704569675,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"} policy-apex-pdp | 
[2024-08-03T17:02:49.749+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-03T17:02:49.762+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d93a64-a6d0-463f-b8f0-87613a72b7fd","timestampMs":1722704569683,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.762+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-03T17:02:49.783+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","timestampMs":1722704569594,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.785+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"bfbedfa9-335b-4011-82d5-6fe44e215fd5","timestampMs":1722704569785,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.804+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfbedfa9-335b-4011-82d5-6fe44e215fd5","timestampMs":1722704569785,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.806+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-03T17:02:49.860+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77393890-0fa4-42c4-a858-79b45035bd64","timestampMs":1722704569816,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.861+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77393890-0fa4-42c4-a858-79b45035bd64","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"0c33faa8-3a3c-4039-b198-49d72f9aacdb","timestampMs":1722704569861,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.875+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77393890-0fa4-42c4-a858-79b45035bd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0c33faa8-3a3c-4039-b198-49d72f9aacdb","timestampMs":1722704569861,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-03T17:02:49.876+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-03T17:02:56.160+00:00|INFO|RequestLog|qtp1313960293-32] 172.17.0.4 - policyadmin [03/Aug/2024:17:02:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.53.1" policy-apex-pdp | [2024-08-03T17:03:56.083+00:00|INFO|RequestLog|qtp1313960293-28] 172.17.0.4 - policyadmin [03/Aug/2024:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.53.1" =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.8:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-08-03T17:02:05.630+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-08-03T17:02:05.705+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-08-03T17:02:05.706+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-08-03T17:02:07.561+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-08-03T17:02:07.647+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. policy-api | [2024-08-03T17:02:08.098+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-08-03T17:02:08.099+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-08-03T17:02:08.709+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-08-03T17:02:08.721+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-08-03T17:02:08.724+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-08-03T17:02:08.724+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-08-03T17:02:08.824+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-08-03T17:02:08.825+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3038 ms policy-api | [2024-08-03T17:02:09.217+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-08-03T17:02:09.281+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-08-03T17:02:09.323+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-08-03T17:02:09.608+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-08-03T17:02:09.646+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-08-03T17:02:09.740+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7718a40f policy-api | [2024-08-03T17:02:09.742+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2024-08-03T17:02:11.660+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-08-03T17:02:11.664+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-08-03T17:02:12.673+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-08-03T17:02:13.573+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-08-03T17:02:14.820+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-08-03T17:02:15.043+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@203f1447, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@9b43134, org.springframework.security.web.context.SecurityContextHolderFilter@73e505d5, org.springframework.security.web.header.HeaderWriterFilter@347b27f3, org.springframework.security.web.authentication.logout.LogoutFilter@597f9d9d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3c703142, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7908e69e, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4901ff51, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1ae2b0d0, org.springframework.security.web.access.ExceptionTranslationFilter@6c84e4ec, org.springframework.security.web.access.intercept.AuthorizationFilter@78ad7d17] policy-api | 
[2024-08-03T17:02:15.845+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-08-03T17:02:15.963+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-08-03T17:02:16.001+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-08-03T17:02:16.020+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.11 seconds (process running for 11.821) policy-api | [2024-08-03T17:02:39.927+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-08-03T17:02:39.928+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-08-03T17:02:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-api | [2024-08-03T17:03:02.034+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: policy-api | [] =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 
policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v CLAMP_K8S_TEST: policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Test policy-csit | ============================================================================== policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Healthcheck :: Verify policy pap health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | policy-csit | 22 tests, 22 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Slas policy-csit | ============================================================================== policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | policy-csit | 8 tests, 8 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas | PASS | policy-csit | 30 tests, 30 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... 
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, 
ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON 
pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) 
FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS 
jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 
policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:57 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator 
| 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 29 
0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:58 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:01:59 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 65 
0740-toscarelationshiptype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:00 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 
17:02:01 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:01 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:02 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0308241701570800u 1 2024-08-03 17:02:02 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 
0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0308241701570900u 1 2024-08-03 17:02:02 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 
0308241701571000u 1 2024-08-03 17:02:02 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:03 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:03 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0308241701571000u 1 2024-08-03 17:02:03 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0308241701571100u 1 2024-08-03 17:02:03 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0308241701571200u 1 2024-08-03 17:02:03 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0308241701571200u 1 2024-08-03 17:02:03 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0308241701571200u 1 2024-08-03 17:02:03 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0308241701571200u 1 2024-08-03 17:02:03 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0308241701571300u 1 2024-08-03 17:02:03 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0308241701571300u 1 2024-08-03 17:02:03 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0308241701571300u 1 2024-08-03 17:02:03 policy-db-migrator | policyadmin: OK @ 1300 =================================== ======== Logs from pap ======== policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.2:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.6:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.9:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2024-08-03T17:02:17.985+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2024-08-03T17:02:18.063+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-08-03T17:02:18.065+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-08-03T17:02:20.239+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-08-03T17:02:20.346+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces. policy-pap | [2024-08-03T17:02:20.846+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-08-03T17:02:20.847+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-08-03T17:02:21.530+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-08-03T17:02:21.541+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-08-03T17:02:21.544+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-08-03T17:02:21.544+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-08-03T17:02:21.654+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-08-03T17:02:21.655+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3507 ms policy-pap | [2024-08-03T17:02:22.107+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-08-03T17:02:22.171+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-08-03T17:02:22.555+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-08-03T17:02:22.661+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@72f8ae0c policy-pap | [2024-08-03T17:02:22.663+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-pap | [2024-08-03T17:02:22.695+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
policy-pap | [2024-08-03T17:02:24.418+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
policy-pap | [2024-08-03T17:02:24.430+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2024-08-03T17:02:25.002+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
policy-pap | [2024-08-03T17:02:25.429+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
policy-pap | [2024-08-03T17:02:25.549+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-08-03T17:02:25.823+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-4a63ef6b-3774-49bc-b415-e99c86982494-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 4a63ef6b-3774-49bc-b415-e99c86982494 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | 
ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-03T17:02:26.028+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-03T17:02:26.029+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-03T17:02:26.029+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704546027 policy-pap | [2024-08-03T17:02:26.031+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-1, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-03T17:02:26.032+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | 
fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 
policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-03T17:02:26.038+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | 
[2024-08-03T17:02:26.038+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-08-03T17:02:26.038+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704546038
policy-pap | [2024-08-03T17:02:26.038+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-08-03T17:02:26.452+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2024-08-03T17:02:26.623+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning
policy-pap | [2024-08-03T17:02:26.880+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@27a6384b, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@29ee8174, org.springframework.security.web.context.SecurityContextHolderFilter@389bc2d3, org.springframework.security.web.header.HeaderWriterFilter@33c9f1ac, org.springframework.security.web.authentication.logout.LogoutFilter@6d92e327, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@12ebfb2d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@23263ba, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5acd7d1c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@23365142, org.springframework.security.web.access.ExceptionTranslationFilter@5444f1c3, org.springframework.security.web.access.intercept.AuthorizationFilter@79ee779c]
policy-pap | [2024-08-03T17:02:27.719+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-pap | [2024-08-03T17:02:27.827+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-08-03T17:02:27.859+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2024-08-03T17:02:27.900+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2024-08-03T17:02:27.900+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2024-08-03T17:02:27.902+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2024-08-03T17:02:27.904+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2024-08-03T17:02:27.904+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID
Dispatcher policy-pap | [2024-08-03T17:02:27.905+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-08-03T17:02:27.905+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-08-03T17:02:27.909+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4a63ef6b-3774-49bc-b415-e99c86982494, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6003eb60 policy-pap | [2024-08-03T17:02:27.922+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4a63ef6b-3774-49bc-b415-e99c86982494, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-03T17:02:27.922+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | 
check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 4a63ef6b-3774-49bc-b415-e99c86982494 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | 
sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = 
null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-03T17:02:27.929+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-03T17:02:27.930+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-03T17:02:27.930+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704547929 policy-pap | [2024-08-03T17:02:27.930+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-03T17:02:27.930+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-08-03T17:02:27.930+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7b16e60d-5994-4667-ab24-a6983869bce1, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@662d3e85 policy-pap | [2024-08-03T17:02:27.931+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7b16e60d-5994-4667-ab24-a6983869bce1, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-03T17:02:27.931+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | 
socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-03T17:02:27.935+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-03T17:02:27.936+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-03T17:02:27.936+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704547935 policy-pap | [2024-08-03T17:02:27.936+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-03T17:02:27.937+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-08-03T17:02:27.937+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7b16e60d-5994-4667-ab24-a6983869bce1, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-03T17:02:27.937+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4a63ef6b-3774-49bc-b415-e99c86982494, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-03T17:02:27.937+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0581b86d-0039-4112-b5a8-4abf4faa70a6, alive=false, publisher=null]]: starting policy-pap | [2024-08-03T17:02:27.956+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | 
max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 
10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-08-03T17:02:27.972+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-pap | [2024-08-03T17:02:27.995+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-03T17:02:27.995+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-03T17:02:27.995+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704547995 policy-pap | [2024-08-03T17:02:27.995+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0581b86d-0039-4112-b5a8-4abf4faa70a6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-08-03T17:02:27.995+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de4469f9-755d-403d-aa24-7a9ff369a0da, alive=false, publisher=null]]: starting policy-pap | [2024-08-03T17:02:27.997+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | 
partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | 
socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-08-03T17:02:27.998+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1722704548005 policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de4469f9-755d-403d-aa24-7a9ff369a0da, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-08-03T17:02:28.005+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-08-03T17:02:28.010+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-08-03T17:02:28.011+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-08-03T17:02:28.014+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-08-03T17:02:28.014+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-08-03T17:02:28.015+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-08-03T17:02:28.016+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-08-03T17:02:28.016+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-08-03T17:02:28.016+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-08-03T17:02:28.020+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-08-03T17:02:28.022+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.776 seconds (process running for 11.391) policy-pap | 
[2024-08-03T17:02:28.590+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ policy-pap | [2024-08-03T17:02:28.598+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:28.599+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ policy-pap | [2024-08-03T17:02:28.604+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ policy-pap | [2024-08-03T17:02:28.641+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.641+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Cluster ID: Iw6ZavHnRjau9r3O-pEHnQ policy-pap | [2024-08-03T17:02:28.676+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.731+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2024-08-03T17:02:28.747+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2024-08-03T17:02:28.767+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.790+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.882+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.904+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:28.990+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.011+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.109+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.125+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error 
while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.228+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.246+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.340+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.342+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.446+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.448+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.551+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with 
correlation id 18 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.554+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.656+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.659+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.762+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-03T17:02:29.764+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-03T17:02:29.874+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-08-03T17:02:29.874+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | 
[2024-08-03T17:02:29.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-08-03T17:02:29.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] (Re-)joining group policy-pap | [2024-08-03T17:02:29.931+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Request joining group due to: need to re-join with the given member-id: consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80 policy-pap | [2024-08-03T17:02:29.931+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-08-03T17:02:29.931+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] (Re-)joining group policy-pap | [2024-08-03T17:02:29.932+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31 policy-pap | [2024-08-03T17:02:29.934+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-08-03T17:02:29.934+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-08-03T17:02:32.956+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80', protocol='range'} policy-pap | [2024-08-03T17:02:32.957+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31', protocol='range'} policy-pap | [2024-08-03T17:02:32.974+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-08-03T17:02:32.974+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Finished assignment for group at generation 1: {consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-08-03T17:02:33.005+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-809af875-711e-4993-8be7-05feec725e31', protocol='range'} policy-pap | 
[2024-08-03T17:02:33.005+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3-0a2c047c-4329-4157-80c7-6aa9d777ee80', protocol='range'} policy-pap | [2024-08-03T17:02:33.006+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-08-03T17:02:33.006+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-08-03T17:02:33.009+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-08-03T17:02:33.009+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-08-03T17:02:33.031+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-08-03T17:02:33.032+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-08-03T17:02:33.071+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4a63ef6b-3774-49bc-b415-e99c86982494-3, groupId=4a63ef6b-3774-49bc-b415-e99c86982494] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-08-03T17:02:33.074+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-08-03T17:02:41.594+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-08-03T17:02:41.594+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-08-03T17:02:41.596+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms policy-pap | [2024-08-03T17:02:49.497+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-08-03T17:02:49.498+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"71388f22-6298-4cc6-8404-2b768064ac3a","timestampMs":1722704569462,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"} policy-pap | [2024-08-03T17:02:49.498+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"71388f22-6298-4cc6-8404-2b768064ac3a","timestampMs":1722704569462,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"} policy-pap | [2024-08-03T17:02:49.509+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-08-03T17:02:49.617+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting policy-pap | [2024-08-03T17:02:49.617+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting listener policy-pap | [2024-08-03T17:02:49.617+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting timer policy-pap | [2024-08-03T17:02:49.618+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=0e917035-abf8-46e6-93e4-520ff2ebaaa6, expireMs=1722704599617] policy-pap | [2024-08-03T17:02:49.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting enqueue policy-pap | [2024-08-03T17:02:49.620+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=0e917035-abf8-46e6-93e4-520ff2ebaaa6, expireMs=1722704599617] policy-pap | [2024-08-03T17:02:49.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate started policy-pap | [2024-08-03T17:02:49.624+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","timestampMs":1722704569592,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | 
[2024-08-03T17:02:49.676+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","timestampMs":1722704569592,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-03T17:02:49.676+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","timestampMs":1722704569592,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-03T17:02:49.676+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-08-03T17:02:49.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-08-03T17:02:49.688+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35b70c02-d60f-431e-9f25-11bb38c59017","timestampMs":1722704569675,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"} policy-pap | [2024-08-03T17:02:49.689+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-08-03T17:02:49.727+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d93a64-a6d0-463f-b8f0-87613a72b7fd","timestampMs":1722704569683,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping enqueue
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping timer
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=0e917035-abf8-46e6-93e4-520ff2ebaaa6, expireMs=1722704599617]
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping listener
policy-pap | [2024-08-03T17:02:49.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopped
policy-pap | [2024-08-03T17:02:49.730+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35b70c02-d60f-431e-9f25-11bb38c59017","timestampMs":1722704569675,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup"}
policy-pap | [2024-08-03T17:02:49.765+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate successful
policy-pap | [2024-08-03T17:02:49.765+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b start publishing next request
policy-pap | [2024-08-03T17:02:49.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange starting
policy-pap | [2024-08-03T17:02:49.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange starting listener
policy-pap | [2024-08-03T17:02:49.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange starting timer
policy-pap | [2024-08-03T17:02:49.769+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=32482d9a-ce5d-4453-9160-0ba6f821bf3b, expireMs=1722704599769]
policy-pap | [2024-08-03T17:02:49.769+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=32482d9a-ce5d-4453-9160-0ba6f821bf3b, expireMs=1722704599769]
policy-pap | [2024-08-03T17:02:49.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange starting enqueue
policy-pap | [2024-08-03T17:02:49.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange started
policy-pap | [2024-08-03T17:02:49.771+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","timestampMs":1722704569594,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.833+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","timestampMs":1722704569594,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.834+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-08-03T17:02:49.837+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfbedfa9-335b-4011-82d5-6fe44e215fd5","timestampMs":1722704569785,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.845+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange stopping
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange stopping enqueue
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange stopping timer
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=32482d9a-ce5d-4453-9160-0ba6f821bf3b, expireMs=1722704599769]
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange stopping listener
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap]
apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange stopped
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpStateChange successful
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b start publishing next request
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting listener
policy-pap | [2024-08-03T17:02:49.846+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting timer
policy-pap | [2024-08-03T17:02:49.847+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=77393890-0fa4-42c4-a858-79b45035bd64, expireMs=1722704599847]
policy-pap | [2024-08-03T17:02:49.847+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate starting enqueue
policy-pap | [2024-08-03T17:02:49.847+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate started
policy-pap | [2024-08-03T17:02:49.849+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77393890-0fa4-42c4-a858-79b45035bd64","timestampMs":1722704569816,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.849+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0e917035-abf8-46e6-93e4-520ff2ebaaa6","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d93a64-a6d0-463f-b8f0-87613a72b7fd","timestampMs":1722704569683,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.850+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 0e917035-abf8-46e6-93e4-520ff2ebaaa6
policy-pap | [2024-08-03T17:02:49.856+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","timestampMs":1722704569594,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.856+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-08-03T17:02:49.856+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"32482d9a-ce5d-4453-9160-0ba6f821bf3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfbedfa9-335b-4011-82d5-6fe44e215fd5","timestampMs":1722704569785,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.856+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 32482d9a-ce5d-4453-9160-0ba6f821bf3b
policy-pap | [2024-08-03T17:02:49.859+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77393890-0fa4-42c4-a858-79b45035bd64","timestampMs":1722704569816,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.859+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2024-08-03T17:02:49.865+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-6f582890-8853-4021-af04-a5b623a0daae","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77393890-0fa4-42c4-a858-79b45035bd64","timestampMs":1722704569816,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.865+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2024-08-03T17:02:49.871+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77393890-0fa4-42c4-a858-79b45035bd64","responseStatus":"SUCCESS","responseMessage":"Pdp already
updated"},"messageName":"PDP_STATUS","requestId":"0c33faa8-3a3c-4039-b198-49d72f9aacdb","timestampMs":1722704569861,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.872+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping
policy-pap | [2024-08-03T17:02:49.873+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping enqueue
policy-pap | [2024-08-03T17:02:49.873+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping timer
policy-pap | [2024-08-03T17:02:49.873+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=77393890-0fa4-42c4-a858-79b45035bd64, expireMs=1722704599847]
policy-pap | [2024-08-03T17:02:49.874+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopping listener
policy-pap | [2024-08-03T17:02:49.874+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate stopped
policy-pap | [2024-08-03T17:02:49.874+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77393890-0fa4-42c4-a858-79b45035bd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0c33faa8-3a3c-4039-b198-49d72f9aacdb","timestampMs":1722704569861,"name":"apex-aa659d48-0c23-492d-bd39-b9b1779ff48b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-08-03T17:02:49.874+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 77393890-0fa4-42c4-a858-79b45035bd64
policy-pap | [2024-08-03T17:02:49.878+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b PdpUpdate successful
policy-pap | [2024-08-03T17:02:49.878+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-aa659d48-0c23-492d-bd39-b9b1779ff48b has no more requests
policy-pap | [2024-08-03T17:03:19.618+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=0e917035-abf8-46e6-93e4-520ff2ebaaa6, expireMs=1722704599617]
policy-pap | [2024-08-03T17:03:19.770+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=32482d9a-ce5d-4453-9160-0ba6f821bf3b, expireMs=1722704599769]
policy-pap | [2024-08-03T17:03:23.838+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
policy-pap | [2024-08-03T17:03:23.880+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-08-03T17:03:23.891+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-08-03T17:03:23.892+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-08-03T17:03:24.290+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
policy-pap | [2024-08-03T17:03:24.814+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
policy-pap | [2024-08-03T17:03:24.815+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
policy-pap | [2024-08-03T17:03:25.343+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-08-03T17:03:25.565+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-08-03T17:03:25.673+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-08-03T17:03:25.673+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
policy-pap | [2024-08-03T17:03:25.674+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
policy-pap | [2024-08-03T17:03:25.689+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-08-03T17:03:25Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-08-03T17:03:25Z, user=policyadmin)]
policy-pap | [2024-08-03T17:03:26.360+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2024-08-03T17:03:26.361+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2024-08-03T17:03:26.362+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-08-03T17:03:26.362+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2024-08-03T17:03:26.362+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2024-08-03T17:03:26.394+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-08-03T17:03:26Z, user=policyadmin)]
policy-pap | [2024-08-03T17:03:26.744+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
policy-pap | [2024-08-03T17:03:26.745+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
policy-pap | [2024-08-03T17:03:26.745+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-08-03T17:03:26.745+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6]
Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-08-03T17:03:26.745+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-pap | [2024-08-03T17:03:26.745+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-pap | [2024-08-03T17:03:26.758+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-08-03T17:03:26Z, user=policyadmin)]
policy-pap | [2024-08-03T17:03:27.342+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-08-03T17:03:27.343+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2024-08-03T17:04:28.017+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
===================================
======== Logs from prometheus ========
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:589 level=info msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:633 level=info msg="Starting Prometheus Server" mode=server version="(version=2.53.1, branch=HEAD, revision=14cfec3f6048b735e08c1e9c64c8d4211d32bab4)"
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:638 level=info build_context="(go=go1.22.5, platform=linux/amd64, user=root@9f8e5b6970da, date=20240710-10:16:27, tags=netgo,builtinassets,stringlabels)"
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:639 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:640 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus | ts=2024-08-03T17:01:48.553Z caller=main.go:641 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | ts=2024-08-03T17:01:48.557Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2024-08-03T17:01:48.557Z caller=main.go:1148 level=info msg="Starting TSDB ..."
prometheus | ts=2024-08-03T17:01:48.559Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
prometheus | ts=2024-08-03T17:01:48.559Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus | ts=2024-08-03T17:01:48.561Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus | ts=2024-08-03T17:01:48.561Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.87µs
prometheus | ts=2024-08-03T17:01:48.561Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus | ts=2024-08-03T17:01:48.562Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus | ts=2024-08-03T17:01:48.562Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=25.85µs wal_replay_duration=313.402µs wbl_replay_duration=310ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.87µs total_replay_duration=369.643µs
prometheus | ts=2024-08-03T17:01:48.566Z caller=main.go:1169 level=info fs_type=EXT4_SUPER_MAGIC
prometheus | ts=2024-08-03T17:01:48.566Z caller=main.go:1172 level=info msg="TSDB started"
prometheus | ts=2024-08-03T17:01:48.566Z caller=main.go:1354 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | ts=2024-08-03T17:01:48.571Z caller=main.go:1391 level=info msg="updated GOGC" old=100 new=75
prometheus | ts=2024-08-03T17:01:48.571Z caller=main.go:1402 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=5.129152ms db_storage=1.93µs remote_storage=2.91µs web_handler=1.23µs query_engine=2.06µs scrape=4.168716ms scrape_sd=147.301µs notify=35.91µs notify_sd=14.4µs rules=2.84µs tracing=8.601µs
prometheus | ts=2024-08-03T17:01:48.571Z caller=main.go:1133 level=info msg="Server is ready to receive web requests."
prometheus | ts=2024-08-03T17:01:48.571Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..."
===================================
======== Logs from simulator ========
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
simulator | 2024-08-03 17:01:48,069 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | 2024-08-03 17:01:48,128 INFO org.onap.policy.models.simulators starting
simulator | 2024-08-03 17:01:48,129 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
simulator | 2024-08-03 17:01:48,322 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
simulator | 2024-08-03 17:01:48,323 INFO org.onap.policy.models.simulators starting A&AI simulator
simulator | 2024-08-03 17:01:48,435 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null,
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-03 17:01:48,447 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:48,450 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:48,457 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-03 17:01:48,532 INFO Session workerName=node0
simulator | 2024-08-03 17:01:49,087 INFO Using GSON for REST calls
simulator | 2024-08-03 17:01:49,158 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
simulator | 2024-08-03 17:01:49,167 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
simulator | 2024-08-03 17:01:49,176 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1567ms
simulator | 2024-08-03 17:01:49,176 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4274 ms.
simulator | 2024-08-03 17:01:49,184 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2024-08-03 17:01:49,189 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-03 17:01:49,189 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,190 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}},
swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,190 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-03 17:01:49,211 INFO Session workerName=node0
simulator | 2024-08-03 17:01:49,269 INFO Using GSON for REST calls
simulator | 2024-08-03 17:01:49,278 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
simulator | 2024-08-03 17:01:49,280 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2024-08-03 17:01:49,280 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1671ms
simulator | 2024-08-03 17:01:49,280 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4910 ms.
simulator | 2024-08-03 17:01:49,281 INFO org.onap.policy.models.simulators starting SO simulator
simulator | 2024-08-03 17:01:49,284 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-03 17:01:49,284 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,285 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,285 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-03 17:01:49,288 INFO Session workerName=node0
simulator | 2024-08-03 17:01:49,340 INFO Using GSON for REST calls
simulator | 2024-08-03 17:01:49,353 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
simulator | 2024-08-03 17:01:49,355 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
simulator | 2024-08-03 17:01:49,355 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1746ms
simulator | 2024-08-03 17:01:49,355 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669},
jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4930 ms.
simulator | 2024-08-03 17:01:49,356 INFO org.onap.policy.models.simulators starting VFC simulator
simulator | 2024-08-03 17:01:49,358 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-03 17:01:49,358 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,359 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-03 17:01:49,360 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-03 17:01:49,363 INFO Session workerName=node0
simulator | 2024-08-03 17:01:49,420 INFO Using GSON for REST calls
simulator | 2024-08-03 17:01:49,430 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
simulator | 2024-08-03 17:01:49,431 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
simulator | 2024-08-03 17:01:49,431 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1823ms
simulator | 2024-08-03 17:01:49,432 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4927 ms.
simulator | 2024-08-03 17:01:49,433 INFO org.onap.policy.models.simulators started
===================================
======== Logs from zookeeper ========
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2024-08-03 17:01:49,832] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-03 17:01:49,835] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-03 17:01:49,835] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-03 17:01:49,835] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-03 17:01:49,835] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-03 17:01:49,836] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-08-03 17:01:49,836] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-08-03 17:01:49,836] INFO Purge task is not scheduled.
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-08-03 17:01:49,836] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-08-03 17:01:49,838] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-08-03 17:01:49,838] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-08-03 17:01:49,838] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-08-03 17:01:49,838] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-08-03 17:01:49,838] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-08-03 17:01:49,839] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-08-03 17:01:49,839] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-08-03 17:01:49,848] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-08-03 17:01:49,851] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-08-03 17:01:49,851] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-08-03 17:01:49,855] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-08-03 17:01:49,862] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,862] INFO ______ _ 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,862] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,863] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,864] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka
/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/ja
va/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/k
afka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server 
environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,865] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,866] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,867] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2024-08-03 17:01:49,868] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2024-08-03 17:01:49,868] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,869] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-08-03 17:01:49,869] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-08-03 17:01:49,870] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,870] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,870] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,870] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,871] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,871] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-03 17:01:49,872] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,873] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,873] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-08-03 17:01:49,873] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-08-03 17:01:49,873] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 
snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:49,893] INFO Logging initialized @383ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-08-03 17:01:49,942] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-03 17:01:49,942] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-03 17:01:49,956] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-08-03 17:01:49,975] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-08-03 17:01:49,976] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-08-03 17:01:49,976] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2024-08-03 17:01:49,979] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-08-03 17:01:49,997] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-03 17:01:50,008] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-08-03 17:01:50,008] INFO Started @502ms (org.eclipse.jetty.server.Server) zookeeper | [2024-08-03 17:01:50,008] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-08-03 17:01:50,012] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 
(org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-08-03 17:01:50,013] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-08-03 17:01:50,014] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-08-03 17:01:50,016] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-08-03 17:01:50,025] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-08-03 17:01:50,025] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-08-03 17:01:50,026] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-03 17:01:50,026] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-03 17:01:50,029] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-08-03 17:01:50,029] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-08-03 17:01:50,032] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-03 17:01:50,033] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-08-03 17:01:50,033] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-03 17:01:50,039] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false 
(org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-08-03 17:01:50,039] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-08-03 17:01:50,050] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-08-03 17:01:50,051] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-08-03 17:01:51,272] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... Container policy-csit Stopping Container policy-apex-pdp Stopping Container grafana Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-apex-pdp Stopped Container policy-apex-pdp Removing Container policy-apex-pdp Removed Container policy-pap Stopping Container simulator Stopping Container simulator Stopped Container simulator Removing Container simulator Removed Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container mariadb Stopping Container mariadb Stopped Container mariadb Removing Container mariadb Removed Network compose_default Removing 
Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2054 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins9751151200982710392.sh ---> sysstat.sh [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins18027520676550918849.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins17149057158645763095.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4v0z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-4v0z/bin to PATH INFO: Running in OpenStack, capturing instance metadata 
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins12501786707954577269.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config1756837954395266830tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins17524554365282423838.sh ---> create-netrc.sh [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5379734816948524435.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4v0z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-4v0z/bin to PATH [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins18229928239275434700.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins1152034765597807663.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4v0z from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-4v0z/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins15976940184370345875.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4v0z from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-4v0z/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/77
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-28545 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         885       25136           0        6145       30825
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:e7:ac:f7 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.57/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86022sec preferred_lft 86022sec
    inet6 fe80::f816:3eff:fee7:acf7/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fc:0d:29:2d brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:fcff:fe0d:292d/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-28545)  08/03/24  _x86_64_  (8 CPU)

16:59:26     LINUX RESTART      (8 CPU)

17:00:02          tps      rtps      wtps   bread/s   bwrtn/s
17:01:01       321.62     35.84    285.78   1700.05  34027.45
17:02:01       588.97     30.91    558.06   3074.75 175247.46
17:03:01       187.79      0.55    187.24     58.26  45899.90
17:04:01        22.48      0.00     22.48      0.00  25406.40
17:05:01        50.65      0.45     50.20     20.26  26232.72
Average:       234.00     13.48    220.53    968.19  61453.02

17:00:02  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
17:01:01   30071720  31723436   2867492      8.71     74536   1882696   1371688      4.04    852144   1718464    215548
17:02:01   25301436  30841340   7637776     23.19    138564   5543936   7341840     21.60   1944684   5133832      4492
17:03:01   23451240  29415900   9487972     28.80    168996   5899732   9712584     28.58   3501208   5374576       568
17:04:01   23560340  29527704   9378872     28.47    169256   5900796   9214136     27.11   3400248   5367320       276
17:05:01   25086100  30933668   7853112     23.84    170036   5792036   3506880     10.32   2024732   5263100        92
Average:   25494167  30488410   7445045     22.60    144278   5003839   6229426     18.33   2344603   4571458     44195

17:00:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
17:01:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:01:01         ens3     67.38     46.31   1029.66     19.90      0.00      0.00      0.00      0.00
17:01:01           lo      1.36      1.36      0.16      0.16      0.00      0.00      0.00      0.00
17:02:01  veth6ef93d8     19.45     25.60      1.77      3.03      0.00      0.00      0.00      0.00
17:02:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:02:01  veth0e5cfa9      0.58      0.87      0.07      0.39      0.00      0.00      0.00      0.00
17:02:01  vethc602a42      0.03      0.33      0.00      0.02      0.00      0.00      0.00      0.00
17:03:01  veth6ef93d8     34.44     40.06     18.05     12.76      0.00      0.00      0.00      0.00
17:03:01      docker0     14.81     19.28      2.19    285.52      0.00      0.00      0.00      0.00
17:03:01  vethe14ce18      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:03:01  veth0e5cfa9      0.18      0.27      0.01      0.01      0.00      0.00      0.00      0.00
17:04:01  veth6ef93d8     46.11     57.17     59.44     17.08      0.00      0.00      0.00      0.00
17:04:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:04:01  veth0e5cfa9      0.22      0.15      0.02      0.01      0.00      0.00      0.00      0.00
17:04:01  vethc602a42      1.10      1.47      0.10      2.61      0.00      0.00      0.00      0.00
17:05:01  veth6ef93d8      0.38      0.82      0.12      0.12      0.00      0.00      0.00      0.00
17:05:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:05:01         ens3   1537.65    860.55  33936.93    149.77      0.00      0.00      0.00      0.00
17:05:01           lo     27.12     27.12      2.50      2.50      0.00      0.00      0.00      0.00
Average:  veth6ef93d8     20.14     24.81     15.93      6.62      0.00      0.00      0.00      0.00
Average:      docker0      2.97      3.87      0.44     57.29      0.00      0.00      0.00      0.00
Average:         ens3    248.06    128.12   6646.83     18.16      0.00      0.00      0.00      0.00
Average:           lo      4.65      4.65      0.44      0.44      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-28545)  08/03/24  _x86_64_  (8 CPU)

16:59:26     LINUX RESTART      (8 CPU)
17:00:02     CPU     %user     %nice   %system   %iowait    %steal     %idle
17:01:01     all      9.37      0.00      0.88      3.02      0.03     86.69
17:01:01       0     21.84      0.00      1.15      1.58      0.07     75.36
17:01:01       1     21.37      0.00      1.56      1.39      0.03     75.65
17:01:01       2      4.96      0.00      1.44      3.84      0.05     89.70
17:01:01       3     19.44      0.00      1.43      1.40      0.07     77.67
17:01:01       4      3.70      0.00      0.46      0.02      0.00     95.83
17:01:01       5      1.15      0.00      0.29      0.24      0.02     98.30
17:01:01       6      0.54      0.00      0.15      0.10      0.02     99.18
17:01:01       7      1.95      0.00      0.58     15.65      0.03     81.78
17:02:01     all     17.29      0.00      7.29     10.62      0.09     64.71
17:02:01       0     17.98      0.00      7.75     13.40      0.08     60.78
17:02:01       1     16.64      0.00      6.89      1.50      0.07     74.90
17:02:01       2     18.55      0.00      8.22      5.48      0.08     67.66
17:02:01       3     16.55      0.00      6.74      5.17      0.08     71.45
17:02:01       4     16.71      0.00      6.65     11.99      0.08     64.56
17:02:01       5     15.74      0.00      5.79      1.97      0.08     76.42
17:02:01       6     17.34      0.00      9.15     38.72      0.12     34.67
17:02:01       7     18.85      0.00      7.06      6.82      0.09     67.18
17:03:01     all     25.51      0.00      3.91      2.78      0.10     67.70
17:03:01       0     28.19      0.00      4.11      2.38      0.10     65.21
17:03:01       1     18.52      0.00      3.24      0.20      0.08     77.95
17:03:01       2     24.65      0.00      3.91      0.57      0.12     70.75
17:03:01       3     28.03      0.00      4.28      2.44      0.10     65.15
17:03:01       4     32.32      0.00      4.42      7.50      0.12     55.64
17:03:01       5     17.26      0.00      2.95      4.34      0.10     75.35
17:03:01       6     27.83      0.00      3.69      0.37      0.10     68.01
17:03:01       7     27.29      0.00      4.72      4.48      0.08     63.42
17:04:01     all      5.17      0.00      0.46      1.17      0.07     93.13
17:04:01       0      5.98      0.00      0.42      8.49      0.05     85.07
17:04:01       1      4.09      0.00      0.37      0.03      0.05     95.46
17:04:01       2      5.57      0.00      0.59      0.00      0.08     93.76
17:04:01       3      4.92      0.00      0.43      0.00      0.08     94.56
17:04:01       4      4.51      0.00      0.42      0.72      0.05     94.30
17:04:01       5      3.09      0.00      0.30      0.07      0.05     96.49
17:04:01       6      7.11      0.00      0.62      0.00      0.08     92.19
17:04:01       7      6.03      0.00      0.57      0.03      0.07     93.30
17:05:01     all      1.87      0.00      0.62      1.17      0.06     96.27
17:05:01       0      1.92      0.00      0.70      8.54      0.08     88.75
17:05:01       1      2.30      0.00      0.53      0.12      0.05     97.00
17:05:01       2      1.68      0.00      0.69      0.08      0.07     97.48
17:05:01       3      2.10      0.00      0.67      0.08      0.05     97.10
17:05:01       4      1.97      0.00      0.65      0.08      0.07     97.23
17:05:01       5      1.39      0.00      0.55      0.05      0.07     97.94
17:05:01       6      2.02      0.00      0.55      0.02      0.05     97.36
17:05:01       7      1.60      0.00      0.62      0.40      0.10     97.28
Average:     all     11.82      0.00      2.62      3.74      0.07     81.75
Average:       0     15.14      0.00      2.82      6.88      0.08     75.08
Average:       1     12.54      0.00      2.51      0.65      0.06     84.25
Average:       2     11.09      0.00      2.97      1.99      0.08     83.87
Average:       3     14.16      0.00      2.70      1.81      0.08     81.25
Average:       4     11.81      0.00      2.51      4.05      0.06     81.57
Average:       5      7.71      0.00      1.97      1.33      0.06     88.93
Average:       6     10.96      0.00      2.82      7.79      0.07     78.35
Average:       7     11.13      0.00      2.70      5.44      0.07     80.66