Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-37322 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-QDj2vSpKv6JE/agent.2125
SSH_AGENT_PID=2127
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_6953955635226924593.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_6953955635226924593.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
Commit message: "Fix timeout in pap CSIT for auditing undeploys"
 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
provisioning config files...
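(To reproduce this checkout outside Jenkins, a minimal sketch; the repository URL and revision come from the log above, the target directory name is arbitrary:)

 # Clone the same mirror and pin the exact revision the build used.
 git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker
 cd policy-docker
 git fetch origin newdelhi
 git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec   # detached HEAD, same commit as the build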
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins7656754918943841155.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-M6Od
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-M6Od/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.2 from /tmp/venv-M6Od/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.5.0
aspy.yaml==1.3.0
attrs==24.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.35.14
botocore==1.35.14
bs4==0.0.2
cachetools==5.5.0
certifi==2024.8.30
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.2.0
filelock==3.16.0
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.34.0
httplib2==0.22.0
identify==2.6.0
idna==3.8
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.8.0
kubernetes==30.1.0
lftools==0.37.10
lxml==5.3.0
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.0.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.1.0
oslo.config==9.6.0
oslo.context==5.6.0
oslo.i18n==6.4.0
oslo.log==6.1.2
oslo.serialization==5.5.0
oslo.utils==7.3.0
packaging==24.1
pbr==6.1.0
platformdirs==4.3.2
prettytable==3.11.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.4.0
PyJWT==2.9.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.6.0
python-dateutil==2.9.0.post0
python-heatclient==4.0.0
python-jenkins==1.8.2
python-keystoneclient==5.5.0
python-magnumclient==4.7.0
python-openstackclient==7.0.0
python-swiftclient==4.6.0
PyYAML==6.0.2
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.20.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.3
six==1.16.0
smmap==5.0.1
soupsieve==2.6
stevedore==5.3.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.2
tqdm==4.66.5
typing_extensions==4.12.2
tzdata==2024.1
urllib3==1.26.20
virtualenv==20.26.4
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
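(The venv bootstrap above is done by lf-activate-venv() from LF's global-jjb scripts; a rough hand-rolled equivalent, with an illustrative venv path rather than the real /tmp/venv-M6Od one:)

 # Approximate what lf-activate-venv() does for this job.
 python3 -m venv /tmp/venv-example        # create the python3 venv
 source /tmp/venv-example/bin/activate    # put its bin/ on PATH
 pip install --upgrade pip
 pip install lftools                      # the one package the job requests
 pip freeze > requirements.txt            # the "Generating Requirements File" step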
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins6167923589653433475.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins7835641181789639090.sh
+ /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
(curl progress meter elided; 60.0M downloaded)
Setting project configuration for: pap
Configuring docker compose...
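(The plugin install triggered above follows Docker's documented manual CLI-plugin procedure; a minimal sketch, with an illustrative release version and the per-user install path rather than whatever the CSIT script actually uses:)

 # Install the Compose v2 CLI plugin by hand (version is illustrative).
 mkdir -p ~/.docker/cli-plugins
 curl -SL https://github.com/docker/compose/releases/download/v2.29.2/docker-compose-linux-x86_64 \
      -o ~/.docker/cli-plugins/docker-compose
 chmod +x ~/.docker/cli-plugins/docker-compose
 docker compose version   # now resolves instead of "'compose' is not a docker command"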
Starting apex-pdp application with Grafana
zookeeper Pulling
pap Pulling
grafana Pulling
kafka Pulling
mariadb Pulling
policy-db-migrator Pulling
prometheus Pulling
apex-pdp Pulling
api Pulling
simulator Pulling
(per-layer docker pull progress elided: interleaved fs-layer "Waiting", "Downloading", "Verifying Checksum", "Download complete", "Extracting", and "Pull complete" updates for all ten images)
pap Pulled
policy-db-migrator Pulled
simulator Pulled
api Pulled
(captured log ends here, with the remaining image pulls still in progress)
Verifying Checksum 33966fd36306 Download complete ab3c28da242b Extracting [=====================> ] 28.97MB/65.84MB 1afe4a0d7329 Pull complete bd55ccfa5aad Extracting [===========> ] 32.77kB/138.1kB bd55ccfa5aad Extracting [==================================================>] 138.1kB/138.1kB bd55ccfa5aad Extracting [==================================================>] 138.1kB/138.1kB 806be17e856d Extracting [====================> ] 36.21MB/89.72MB a453f30e82bf Downloading [========================================> ] 210MB/257.5MB a453f30e82bf Downloading [========================================> ] 210MB/257.5MB 353af139d39e Extracting [=========================================> ] 203.3MB/246.5MB ab3c28da242b Extracting [========================> ] 32.87MB/65.84MB ab973a5038b6 Downloading [==========================================> ] 104.1MB/121.6MB 806be17e856d Extracting [======================> ] 39.55MB/89.72MB 5aee3e0528f7 Downloading [==========> ] 720B/3.445kB 5aee3e0528f7 Downloading [==================================================>] 3.445kB/3.445kB 5aee3e0528f7 Verifying Checksum 5aee3e0528f7 Download complete a453f30e82bf Downloading [===========================================> ] 224.5MB/257.5MB a453f30e82bf Downloading [===========================================> ] 224.5MB/257.5MB 353af139d39e Extracting [===========================================> ] 213.9MB/246.5MB ab3c28da242b Extracting [============================> ] 37.32MB/65.84MB 806be17e856d Extracting [=======================> ] 42.34MB/89.72MB ab973a5038b6 Downloading [================================================> ] 118.1MB/121.6MB 353af139d39e Extracting [============================================> ] 218.9MB/246.5MB ab973a5038b6 Verifying Checksum ab973a5038b6 Download complete a453f30e82bf Downloading [=============================================> ] 234.8MB/257.5MB a453f30e82bf Downloading [=============================================> ] 234.8MB/257.5MB ab3c28da242b Extracting [=============================> ] 38.44MB/65.84MB 353af139d39e Extracting [============================================> ] 219.5MB/246.5MB 806be17e856d Extracting [=======================> ] 42.89MB/89.72MB bd55ccfa5aad Pull complete ab3c28da242b Extracting [===============================> ] 41.22MB/65.84MB a453f30e82bf Downloading [===============================================> ] 243.9MB/257.5MB a453f30e82bf Downloading [===============================================> ] 243.9MB/257.5MB 806be17e856d Extracting [========================> ] 44.56MB/89.72MB 353af139d39e Extracting [=============================================> ] 225.1MB/246.5MB ab3c28da242b Extracting [==================================> ] 45.68MB/65.84MB a453f30e82bf Verifying Checksum a453f30e82bf Download complete a453f30e82bf Verifying Checksum a453f30e82bf Download complete 806be17e856d Extracting [==========================> ] 47.91MB/89.72MB 353af139d39e Extracting [===============================================> ] 231.7MB/246.5MB ab3c28da242b Extracting [=====================================> ] 49.02MB/65.84MB 806be17e856d Extracting [============================> ] 51.81MB/89.72MB 353af139d39e Extracting [================================================> ] 237.3MB/246.5MB a453f30e82bf Extracting [> ] 557.1kB/257.5MB a453f30e82bf Extracting [> ] 557.1kB/257.5MB 54f884861fc1 Extracting [==================================================>] 100B/100B 54f884861fc1 Extracting [==================================================>] 100B/100B 353af139d39e 
Extracting [==================================================>] 246.5MB/246.5MB ab3c28da242b Extracting [=======================================> ] 51.81MB/65.84MB 806be17e856d Extracting [===============================> ] 56.82MB/89.72MB a453f30e82bf Extracting [=> ] 9.47MB/257.5MB a453f30e82bf Extracting [=> ] 9.47MB/257.5MB ab3c28da242b Extracting [=========================================> ] 54.59MB/65.84MB a453f30e82bf Extracting [===> ] 16.71MB/257.5MB a453f30e82bf Extracting [===> ] 16.71MB/257.5MB 806be17e856d Extracting [=================================> ] 59.6MB/89.72MB 54f884861fc1 Pull complete 353af139d39e Pull complete b09316e948c6 Extracting [==================================================>] 719B/719B b09316e948c6 Extracting [==================================================>] 719B/719B ab3c28da242b Extracting [===========================================> ] 57.38MB/65.84MB 806be17e856d Extracting [==================================> ] 61.28MB/89.72MB a453f30e82bf Extracting [====> ] 21.73MB/257.5MB a453f30e82bf Extracting [====> ] 21.73MB/257.5MB apex-pdp Pulled ab3c28da242b Extracting [===============================================> ] 62.39MB/65.84MB 806be17e856d Extracting [====================================> ] 65.73MB/89.72MB a453f30e82bf Extracting [=====> ] 27.3MB/257.5MB a453f30e82bf Extracting [=====> ] 27.3MB/257.5MB a453f30e82bf Extracting [=====> ] 27.85MB/257.5MB a453f30e82bf Extracting [=====> ] 27.85MB/257.5MB 806be17e856d Extracting [=====================================> ] 67.96MB/89.72MB ab3c28da242b Extracting [================================================> ] 63.5MB/65.84MB b09316e948c6 Pull complete 806be17e856d Extracting [======================================> ] 68.52MB/89.72MB a453f30e82bf Extracting [======> ] 31.75MB/257.5MB a453f30e82bf Extracting [======> ] 31.75MB/257.5MB ab3c28da242b Extracting [=================================================> ] 64.62MB/65.84MB ab3c28da242b Extracting [==================================================>] 65.84MB/65.84MB 806be17e856d Extracting [=======================================> ] 70.75MB/89.72MB a453f30e82bf Extracting [======> ] 35.09MB/257.5MB a453f30e82bf Extracting [======> ] 35.09MB/257.5MB a453f30e82bf Extracting [=========> ] 49.02MB/257.5MB a453f30e82bf Extracting [=========> ] 49.02MB/257.5MB 806be17e856d Extracting [========================================> ] 73.53MB/89.72MB a453f30e82bf Extracting [============> ] 62.95MB/257.5MB a453f30e82bf Extracting [============> ] 62.95MB/257.5MB 806be17e856d Extracting [===========================================> ] 77.43MB/89.72MB a453f30e82bf Extracting [==============> ] 72.97MB/257.5MB a453f30e82bf Extracting [==============> ] 72.97MB/257.5MB 806be17e856d Extracting [=============================================> ] 81.89MB/89.72MB a453f30e82bf Extracting [================> ] 87.46MB/257.5MB a453f30e82bf Extracting [================> ] 87.46MB/257.5MB 806be17e856d Extracting [==============================================> ] 84.12MB/89.72MB a453f30e82bf Extracting [==================> ] 96.93MB/257.5MB a453f30e82bf Extracting [==================> ] 96.93MB/257.5MB 806be17e856d Extracting [================================================> ] 86.9MB/89.72MB a453f30e82bf Extracting [====================> ] 104.2MB/257.5MB a453f30e82bf Extracting [====================> ] 104.2MB/257.5MB 806be17e856d Extracting [=================================================> ] 89.13MB/89.72MB a453f30e82bf Extracting [=====================> ] 
110.9MB/257.5MB a453f30e82bf Extracting [=====================> ] 110.9MB/257.5MB 806be17e856d Extracting [==================================================>] 89.72MB/89.72MB a453f30e82bf Extracting [=====================> ] 113.1MB/257.5MB a453f30e82bf Extracting [=====================> ] 113.1MB/257.5MB prometheus Pulled ab3c28da242b Pull complete 806be17e856d Pull complete 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB a453f30e82bf Extracting [======================> ] 116.4MB/257.5MB a453f30e82bf Extracting [======================> ] 116.4MB/257.5MB 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 634de6c90876 Pull complete cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB e4892977d944 Extracting [> ] 524.3kB/51.58MB a453f30e82bf Extracting [=======================> ] 121.4MB/257.5MB a453f30e82bf Extracting [=======================> ] 121.4MB/257.5MB cd00854cfb1a Pull complete a453f30e82bf Extracting [========================> ] 124.8MB/257.5MB a453f30e82bf Extracting [========================> ] 124.8MB/257.5MB mariadb Pulled e4892977d944 Extracting [=> ] 1.573MB/51.58MB a453f30e82bf Extracting [=========================> ] 130.4MB/257.5MB a453f30e82bf Extracting [=========================> ] 130.4MB/257.5MB e4892977d944 Extracting [==> ] 2.621MB/51.58MB a453f30e82bf Extracting [==========================> ] 136.5MB/257.5MB a453f30e82bf Extracting [==========================> ] 136.5MB/257.5MB a453f30e82bf Extracting [===========================> ] 142.6MB/257.5MB a453f30e82bf Extracting [===========================> ] 142.6MB/257.5MB e4892977d944 Extracting [====> ] 4.194MB/51.58MB a453f30e82bf Extracting [============================> ] 148.2MB/257.5MB a453f30e82bf Extracting [============================> ] 148.2MB/257.5MB e4892977d944 Extracting [======> ] 6.816MB/51.58MB a453f30e82bf Extracting [=============================> ] 152.1MB/257.5MB a453f30e82bf Extracting [=============================> ] 152.1MB/257.5MB e4892977d944 Extracting [========> ] 8.913MB/51.58MB a453f30e82bf Extracting [==============================> ] 156MB/257.5MB a453f30e82bf Extracting [==============================> ] 156MB/257.5MB a453f30e82bf Extracting [==============================> ] 159.3MB/257.5MB a453f30e82bf Extracting [==============================> ] 159.3MB/257.5MB e4892977d944 Extracting [===========> ] 11.53MB/51.58MB a453f30e82bf Extracting [===============================> ] 163.2MB/257.5MB a453f30e82bf Extracting [===============================> ] 163.2MB/257.5MB e4892977d944 Extracting [=============> ] 14.16MB/51.58MB a453f30e82bf Extracting [=================================> ] 170.5MB/257.5MB a453f30e82bf Extracting [=================================> ] 170.5MB/257.5MB e4892977d944 Extracting [================> ] 17.3MB/51.58MB a453f30e82bf Extracting [=================================> ] 173.2MB/257.5MB a453f30e82bf Extracting [=================================> ] 173.2MB/257.5MB e4892977d944 Extracting [===================> ] 19.92MB/51.58MB a453f30e82bf Extracting [=================================> ] 174.9MB/257.5MB a453f30e82bf Extracting [=================================> ] 174.9MB/257.5MB e4892977d944 Extracting [=======================> ] 24.12MB/51.58MB a453f30e82bf Extracting [==================================> ] 176MB/257.5MB a453f30e82bf 
Extracting [==================================> ] 176MB/257.5MB e4892977d944 Extracting [==========================> ] 27.79MB/51.58MB a453f30e82bf Extracting [==================================> ] 177.1MB/257.5MB a453f30e82bf Extracting [==================================> ] 177.1MB/257.5MB e4892977d944 Extracting [==============================> ] 31.46MB/51.58MB a453f30e82bf Extracting [==================================> ] 179.4MB/257.5MB a453f30e82bf Extracting [==================================> ] 179.4MB/257.5MB e4892977d944 Extracting [==================================> ] 35.13MB/51.58MB a453f30e82bf Extracting [===================================> ] 183.8MB/257.5MB a453f30e82bf Extracting [===================================> ] 183.8MB/257.5MB e4892977d944 Extracting [=====================================> ] 38.27MB/51.58MB a453f30e82bf Extracting [====================================> ] 187.7MB/257.5MB a453f30e82bf Extracting [====================================> ] 187.7MB/257.5MB e4892977d944 Extracting [========================================> ] 41.42MB/51.58MB a453f30e82bf Extracting [=====================================> ] 191.6MB/257.5MB a453f30e82bf Extracting [=====================================> ] 191.6MB/257.5MB e4892977d944 Extracting [==========================================> ] 44.04MB/51.58MB a453f30e82bf Extracting [=====================================> ] 193.9MB/257.5MB a453f30e82bf Extracting [=====================================> ] 193.9MB/257.5MB a453f30e82bf Extracting [=====================================> ] 194.4MB/257.5MB a453f30e82bf Extracting [=====================================> ] 194.4MB/257.5MB a453f30e82bf Extracting [=====================================> ] 195MB/257.5MB a453f30e82bf Extracting [=====================================> ] 195MB/257.5MB e4892977d944 Extracting [================================================> ] 49.81MB/51.58MB e4892977d944 Extracting [==================================================>] 51.58MB/51.58MB a453f30e82bf Extracting [======================================> ] 197.8MB/257.5MB a453f30e82bf Extracting [======================================> ] 197.8MB/257.5MB a453f30e82bf Extracting [======================================> ] 200MB/257.5MB a453f30e82bf Extracting [======================================> ] 200MB/257.5MB a453f30e82bf Extracting [=======================================> ] 202.2MB/257.5MB a453f30e82bf Extracting [=======================================> ] 202.2MB/257.5MB a453f30e82bf Extracting [=======================================> ] 202.8MB/257.5MB a453f30e82bf Extracting [=======================================> ] 202.8MB/257.5MB a453f30e82bf Extracting [=======================================> ] 204.4MB/257.5MB a453f30e82bf Extracting [=======================================> ] 204.4MB/257.5MB a453f30e82bf Extracting [=======================================> ] 205.6MB/257.5MB a453f30e82bf Extracting [=======================================> ] 205.6MB/257.5MB e4892977d944 Pull complete a453f30e82bf Extracting [========================================> ] 208.3MB/257.5MB a453f30e82bf Extracting [========================================> ] 208.3MB/257.5MB a453f30e82bf Extracting [========================================> ] 210.6MB/257.5MB a453f30e82bf Extracting [========================================> ] 210.6MB/257.5MB ef2b3f3f597e Extracting [==================================================>] 11.92kB/11.92kB ef2b3f3f597e Extracting 
[==================================================>] 11.92kB/11.92kB a453f30e82bf Extracting [=========================================> ] 212.2MB/257.5MB a453f30e82bf Extracting [=========================================> ] 212.2MB/257.5MB a453f30e82bf Extracting [=========================================> ] 215.6MB/257.5MB a453f30e82bf Extracting [=========================================> ] 215.6MB/257.5MB a453f30e82bf Extracting [==========================================> ] 219.5MB/257.5MB a453f30e82bf Extracting [==========================================> ] 219.5MB/257.5MB a453f30e82bf Extracting [===========================================> ] 222.8MB/257.5MB a453f30e82bf Extracting [===========================================> ] 222.8MB/257.5MB a453f30e82bf Extracting [===========================================> ] 225.1MB/257.5MB a453f30e82bf Extracting [===========================================> ] 225.1MB/257.5MB a453f30e82bf Extracting [============================================> ] 226.7MB/257.5MB a453f30e82bf Extracting [============================================> ] 226.7MB/257.5MB a453f30e82bf Extracting [============================================> ] 228.4MB/257.5MB a453f30e82bf Extracting [============================================> ] 228.4MB/257.5MB ef2b3f3f597e Pull complete 27a3c8ebdfbf Extracting [==================================================>] 1.227kB/1.227kB 27a3c8ebdfbf Extracting [==================================================>] 1.227kB/1.227kB a453f30e82bf Extracting [=============================================> ] 233.4MB/257.5MB a453f30e82bf Extracting [=============================================> ] 233.4MB/257.5MB a453f30e82bf Extracting [==============================================> ] 237.3MB/257.5MB a453f30e82bf Extracting [==============================================> ] 237.3MB/257.5MB a453f30e82bf Extracting [==============================================> ] 241.8MB/257.5MB a453f30e82bf Extracting [==============================================> ] 241.8MB/257.5MB a453f30e82bf Extracting [=================================================> ] 252.3MB/257.5MB a453f30e82bf Extracting [=================================================> ] 252.3MB/257.5MB a453f30e82bf Extracting [=================================================> ] 254MB/257.5MB a453f30e82bf Extracting [=================================================> ] 254MB/257.5MB a453f30e82bf Extracting [==================================================>] 257.5MB/257.5MB a453f30e82bf Extracting [==================================================>] 257.5MB/257.5MB 27a3c8ebdfbf Pull complete a453f30e82bf Pull complete a453f30e82bf Pull complete 016e383f3f47 Extracting [==================================================>] 1.102kB/1.102kB 016e383f3f47 Extracting [==================================================>] 1.102kB/1.102kB 016e383f3f47 Extracting [==================================================>] 1.102kB/1.102kB 016e383f3f47 Extracting [==================================================>] 1.102kB/1.102kB grafana Pulled 016e383f3f47 Pull complete 016e383f3f47 Pull complete f7d27dafad0a Extracting [> ] 98.3kB/8.351MB f7d27dafad0a Extracting [> ] 98.3kB/8.351MB f7d27dafad0a Extracting [==> ] 491.5kB/8.351MB f7d27dafad0a Extracting [==> ] 491.5kB/8.351MB f7d27dafad0a Extracting [==================================================>] 8.351MB/8.351MB f7d27dafad0a Extracting [==================================================>] 8.351MB/8.351MB f7d27dafad0a Pull complete f7d27dafad0a 
Pull complete 56ccc8be1ca0 Extracting [==================================================>] 21.29kB/21.29kB 56ccc8be1ca0 Extracting [==================================================>] 21.29kB/21.29kB 56ccc8be1ca0 Extracting [==================================================>] 21.29kB/21.29kB 56ccc8be1ca0 Extracting [==================================================>] 21.29kB/21.29kB 56ccc8be1ca0 Pull complete 56ccc8be1ca0 Pull complete f77f01ac624c Extracting [> ] 458.8kB/43.2MB f77f01ac624c Extracting [> ] 458.8kB/43.2MB f77f01ac624c Extracting [=================> ] 15.14MB/43.2MB f77f01ac624c Extracting [=================> ] 15.14MB/43.2MB f77f01ac624c Extracting [=====================================> ] 32.57MB/43.2MB f77f01ac624c Extracting [=====================================> ] 32.57MB/43.2MB f77f01ac624c Extracting [==================================================>] 43.2MB/43.2MB f77f01ac624c Extracting [==================================================>] 43.2MB/43.2MB f77f01ac624c Pull complete f77f01ac624c Pull complete 1c6e35a73ed7 Extracting [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Extracting [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Extracting [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Extracting [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Pull complete 1c6e35a73ed7 Pull complete aa5e151b62ff Extracting [==================================================>] 853B/853B aa5e151b62ff Extracting [==================================================>] 853B/853B aa5e151b62ff Extracting [==================================================>] 853B/853B aa5e151b62ff Extracting [==================================================>] 853B/853B aa5e151b62ff Pull complete aa5e151b62ff Pull complete 262d375318c3 Extracting [==================================================>] 98B/98B 262d375318c3 Extracting [==================================================>] 98B/98B 262d375318c3 Extracting [==================================================>] 98B/98B 262d375318c3 Extracting [==================================================>] 98B/98B 262d375318c3 Pull complete 262d375318c3 Pull complete 28a7d18ebda4 Extracting [==================================================>] 173B/173B 28a7d18ebda4 Extracting [==================================================>] 173B/173B 28a7d18ebda4 Extracting [==================================================>] 173B/173B 28a7d18ebda4 Extracting [==================================================>] 173B/173B 28a7d18ebda4 Pull complete 28a7d18ebda4 Pull complete bdc615dfc787 Extracting [=======> ] 32.77kB/230.6kB bdc615dfc787 Extracting [=======> ] 32.77kB/230.6kB bdc615dfc787 Extracting [==================================================>] 230.6kB/230.6kB bdc615dfc787 Extracting [==================================================>] 230.6kB/230.6kB bdc615dfc787 Pull complete bdc615dfc787 Pull complete ab973a5038b6 Extracting [> ] 557.1kB/121.6MB 33966fd36306 Extracting [> ] 557.1kB/121.6MB ab973a5038b6 Extracting [====> ] 10.58MB/121.6MB 33966fd36306 Extracting [====> ] 11.14MB/121.6MB ab973a5038b6 Extracting [========> ] 19.5MB/121.6MB 33966fd36306 Extracting [==========> ] 24.51MB/121.6MB ab973a5038b6 Extracting [============> ] 31.2MB/121.6MB 33966fd36306 Extracting [===============> ] 37.88MB/121.6MB ab973a5038b6 Extracting [===================> ] 46.79MB/121.6MB 33966fd36306 Extracting 
[=======================> ] 56.26MB/121.6MB ab973a5038b6 Extracting [==========================> ] 64.06MB/121.6MB 33966fd36306 Extracting [==============================> ] 73.53MB/121.6MB ab973a5038b6 Extracting [==============================> ] 74.65MB/121.6MB 33966fd36306 Extracting [===================================> ] 87.46MB/121.6MB ab973a5038b6 Extracting [====================================> ] 89.13MB/121.6MB 33966fd36306 Extracting [==========================================> ] 102.5MB/121.6MB ab973a5038b6 Extracting [============================================> ] 108.6MB/121.6MB 33966fd36306 Extracting [================================================> ] 117MB/121.6MB ab973a5038b6 Extracting [================================================> ] 118.1MB/121.6MB 33966fd36306 Extracting [=================================================> ] 121.4MB/121.6MB 33966fd36306 Extracting [==================================================>] 121.6MB/121.6MB ab973a5038b6 Extracting [==================================================>] 121.6MB/121.6MB 33966fd36306 Pull complete ab973a5038b6 Pull complete 8b4455fb60b9 Extracting [==================================================>] 3.627kB/3.627kB 8b4455fb60b9 Extracting [==================================================>] 3.627kB/3.627kB 5aee3e0528f7 Extracting [==================================================>] 3.445kB/3.445kB 5aee3e0528f7 Extracting [==================================================>] 3.445kB/3.445kB 8b4455fb60b9 Pull complete 5aee3e0528f7 Pull complete zookeeper Pulled kafka Pulled Network compose_default Creating Network compose_default Created Container simulator Creating Container mariadb Creating Container prometheus Creating Container zookeeper Creating Container mariadb Created Container simulator Created Container policy-db-migrator Creating Container prometheus Created Container grafana Creating Container zookeeper Created Container kafka Creating Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container grafana Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container zookeeper Starting Container prometheus Starting Container simulator Starting Container mariadb Starting Container zookeeper Started Container kafka Starting Container kafka Started Container simulator Started Container mariadb Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container policy-api Started Container policy-pap Starting Container policy-pap Started Container policy-apex-pdp Starting Container prometheus Started Container grafana Starting Container grafana Started Container policy-apex-pdp Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting for REST to come up on localhost port 30003... 
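Editor's note: the wait script itself is not shown in this log. As an illustration only, a minimal bash loop of the kind typically used for this step might look as follows; the timeout value and the use of netcat are assumptions, not taken from the job.

#!/bin/bash
# Illustrative sketch only -- not the actual wait script used by this job.
# Poll localhost:30003 until the PAP REST port accepts TCP connections or we time out.
TIMEOUT=120   # assumed value
ELAPSED=0
until nc -z localhost 30003; do
  docker ps --format 'table {{.Names}}\t{{.Status}}'   # produces NAMES/STATUS snapshots like those below
  sleep 5
  ELAPSED=$((ELAPSED + 5))
  if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
    echo "REST did not come up on localhost:30003 within ${TIMEOUT}s" >&2
    exit 1
  fi
done
echo "REST is up on localhost:30003"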
NAMES/STATUS polled every ~5 seconds while waiting; across six snapshots all nine containers (policy-apex-pdp, policy-pap, policy-api, kafka, grafana, zookeeper, simulator, mariadb, prometheus) progress from Up 10 seconds to Up 42 seconds.
Build docker image for robot framework
Error: No such image: policy-csit-robot
Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
Build robot framework docker image
Sending build context to Docker daemon 16.14MB
Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
3.10-slim-bullseye: Pulling from library/python (five fs layers pulled; progress omitted)
Digest: sha256:8e53874607bf1b7e97ad9fab4ee1bc3b237731e45b481e037dd2a30f603b0ac7
Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye ---> 43bcca73fe15
Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} ---> Running in b391ac21bf2c Removing intermediate container b391ac21bf2c ---> 2032c2204eeb
Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} ---> Running in b2a41e3fb1c2 Removing intermediate container b2a41e3fb1c2 ---> 7a1de225dcec
Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST ---> Running in 44a4fcaafebc Removing intermediate container 44a4fcaafebc ---> 22910331b9f9
Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze ---> Running in 3f301b0fdecb
bcrypt==4.2.0 certifi==2024.8.30 cffi==1.17.1 charset-normalizer==3.3.2 confluent-kafka==2.5.3 cryptography==43.0.1 decorator==5.1.1 deepdiff==8.0.1 dnspython==2.6.1 future==1.0.0 idna==3.8 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==2.1.5
more-itertools==5.0.0 orderly-set==5.2.2 paramiko==3.4.1 pbr==6.1.0 ply==3.11 protobuf==5.28.0 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2 requests==2.32.3 robotframework==7.1rc2 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a11 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.2
Removing intermediate container 3f301b0fdecb ---> 1da4dedbc6b5
Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE} ---> Running in 7059fcbc1147 Removing intermediate container 7059fcbc1147 ---> c5c8d6781dda
Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/ ---> ce1a90da17cb
Step 8/9 : WORKDIR ${ROBOT_WORKSPACE} ---> Running in 3a9c6911f40b Removing intermediate container 3a9c6911f40b ---> f09a567bd553
Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ] ---> Running in 8cddadab84b4 Removing intermediate container 8cddadab84b4 ---> 973136343aa8
Successfully built 973136343aa8
Successfully tagged policy-csit-robot:latest
top - 17:03:14 up 3 min, 0 users, load average: 2.95, 1.58, 0.63
Tasks: 211 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.2 us, 3.2 sy, 0.0 ni, 76.6 id, 5.8 wa, 0.0 hi, 0.1 si, 0.1 st
       total   used   free   shared   buff/cache   available
Mem:     31G   2.8G    22G     1.3M         6.2G         28G
Swap:   1.0G     0B   1.0G
NAMES/STATUS: all nine containers (policy-apex-pdp, policy-pap, policy-api, kafka, grafana, zookeeper, simulator, mariadb, prometheus) Up About a minute
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
4c626baa1f97   policy-apex-pdp   1.17%   176MiB / 31.41GiB     0.55%   28.1kB / 42.4kB   0B / 0B         50
c62594c01aa9   policy-pap        1.06%   549.6MiB / 31.41GiB   1.71%   113kB / 134kB     0B / 149MB      64
8223c204e391   policy-api        0.09%   484.7MiB / 31.41GiB   1.51%   989kB / 674kB     0B / 0B         53
f95b4d693abc   kafka             2.33%   423.7MiB / 31.41GiB   1.32%   133kB / 130kB     0B / 532kB      87
04354d7d10b8   grafana           0.04%   66.39MiB / 31.41GiB   0.21%   24.3kB / 4.65kB   0B / 27.2MB     22
3be99c5a55f5   zookeeper         0.08%   87.93MiB / 31.41GiB   0.27%   55.5kB / 49.7kB   229kB / 381kB   63
a169c0d4f297   simulator         0.06%   122.5MiB / 31.41GiB   0.38%   1.43kB / 0B       0B / 0B         77
aa7f15a69e8d   mariadb           0.03%   102.6MiB / 31.41GiB   0.32%   969kB / 1.22MB    11MB / 71.6MB   30
eca61f6d3444   prometheus        0.00%   18.81MiB / 31.41GiB   0.06%   67.7kB / 2.99kB   0B / 0B         11
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
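Editor's note: the contents of scripts/run-test.sh are not shown in this log. As a hedged sketch of what the ROBOT_VARIABLES hand-off above implies, a launcher of roughly this shape would pass each "-v NAME:value" pair through to Robot Framework; the exact flags and layout of the real script may differ, though /tmp/results and the "RESULT:" line do appear in the output below.

#!/bin/sh
# Illustrative sketch of a run-test.sh-style launcher; not the actual script.
# ROBOT_VARIABLES is assumed to hold the repeated "-v NAME:value" pairs printed above;
# ROBOT_FILE is set by the Dockerfile ENV (Step 4/9).
mkdir -p /tmp/results
# Word-splitting of $ROBOT_VARIABLES and $ROBOT_FILE is intentional here.
robot --outputdir /tmp/results $ROBOT_VARIABLES $ROBOT_FILE
rc=$?
echo "RESULT: $rc"
exit $rc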
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
NAMES/STATUS: all nine containers (policy-apex-pdp, policy-pap, policy-api, kafka, grafana, zookeeper, simulator, mariadb, prometheus) still Up 2 minutes
Shut down started!
Collecting logs from docker compose containers...
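Editor's note: the job's actual log-collection step is not shown in this log. A minimal sketch of how per-container logs are typically gathered from a compose stack like this one follows; the destination directory is an assumption, while the container names are taken from the stack above.

#!/bin/bash
# Illustrative sketch only -- not the teardown script used by this job.
mkdir -p /tmp/compose-logs   # assumed destination
for c in policy-apex-pdp policy-pap policy-api kafka grafana zookeeper simulator mariadb prometheus; do
  # 'docker logs' captures both stdout and stderr of the container's main process
  docker logs "$c" > "/tmp/compose-logs/${c}.log" 2>&1
done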
======== Logs from grafana ======== grafana | logger=settings t=2024-09-09T17:02:08.460147558Z level=info msg="Starting Grafana" version=11.2.0 commit=2a88694fd3ced0335bf3726cc5d0adc2d1858855 branch=v11.2.x compiled=2024-09-09T17:02:08Z grafana | logger=settings t=2024-09-09T17:02:08.460756819Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-09-09T17:02:08.460777999Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-09-09T17:02:08.46078485Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-09-09T17:02:08.46079037Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-09-09T17:02:08.46079592Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-09-09T17:02:08.46080171Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-09-09T17:02:08.46080831Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-09-09T17:02:08.46081817Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-09-09T17:02:08.46082455Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-09-09T17:02:08.46082991Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-09-09T17:02:08.460837281Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-09-09T17:02:08.460842351Z level=info msg=Target target=[all] grafana | logger=settings t=2024-09-09T17:02:08.460922872Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-09-09T17:02:08.461001093Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-09-09T17:02:08.461100245Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-09-09T17:02:08.461109755Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-09-09T17:02:08.461115295Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-09-09T17:02:08.461120546Z level=info msg="App mode production" grafana | logger=featuremgmt t=2024-09-09T17:02:08.461617674Z level=info msg=FeatureToggles tlsMemcached=true recoveryThreshold=true autoMigrateXYChartPanel=true dashgpt=true prometheusConfigOverhaulAuth=true cloudWatchCrossAccountQuerying=true panelMonitoring=true lokiStructuredMetadata=true alertingNoDataErrorExecution=true transformationsVariableSupport=true lokiQueryHints=true publicDashboards=true prometheusMetricEncyclopedia=true angularDeprecationUI=true addFieldFromCalculationStatFunctions=true groupToNestedTableTransformation=true logRowsPopoverMenu=true cloudWatchRoundUpEndTime=true nestedFolders=true dataplaneFrontendFallback=true logsInfiniteScrolling=true topnav=true lokiQuerySplitting=true prometheusDataplane=true formatString=true alertingInsights=true transformationsRedesign=true recordedQueriesMulti=true 
cloudWatchNewLabelParsing=true kubernetesPlaylists=true correlations=true ssoSettingsApi=true alertingSimplifiedRouting=true logsExploreTableVisualisation=true lokiMetricDataplane=true prometheusAzureOverrideAudience=true annotationPermissionUpdate=true awsAsyncQueryCaching=true managedPluginsInstall=true exploreMetrics=true logsContextDatasourceUi=true influxdbBackendMigration=true grafana | logger=sqlstore t=2024-09-09T17:02:08.461816238Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-09-09T17:02:08.461853258Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-09-09T17:02:08.4641298Z level=info msg="Locking database" grafana | logger=migrator t=2024-09-09T17:02:08.46414429Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-09-09T17:02:08.465025956Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-09-09T17:02:08.466296029Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.269193ms grafana | logger=migrator t=2024-09-09T17:02:08.471048835Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-09-09T17:02:08.471701437Z level=info msg="Migration successfully executed" id="create user table" duration=652.152µs grafana | logger=migrator t=2024-09-09T17:02:08.474850134Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-09-09T17:02:08.475536996Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=687.062µs grafana | logger=migrator t=2024-09-09T17:02:08.478613732Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-09-09T17:02:08.479876565Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.262654ms grafana | logger=migrator t=2024-09-09T17:02:08.487245598Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-09-09T17:02:08.48845516Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.215542ms grafana | logger=migrator t=2024-09-09T17:02:08.512492724Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-09-09T17:02:08.513599945Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.111311ms grafana | logger=migrator t=2024-09-09T17:02:08.517822301Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-09-09T17:02:08.522372433Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.545343ms grafana | logger=migrator t=2024-09-09T17:02:08.58965816Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-09-09T17:02:08.591464302Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.812162ms grafana | logger=migrator t=2024-09-09T17:02:08.596757918Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-09-09T17:02:08.597828718Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.07026ms grafana | logger=migrator t=2024-09-09T17:02:08.600642309Z level=info msg="Executing migration" id="create index UQE_user_email - 
v2" grafana | logger=migrator t=2024-09-09T17:02:08.601899711Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.257002ms grafana | logger=migrator t=2024-09-09T17:02:08.605733471Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:08.606198499Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=465.578µs grafana | logger=migrator t=2024-09-09T17:02:08.6107083Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-09-09T17:02:08.611338862Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=629.882µs grafana | logger=migrator t=2024-09-09T17:02:08.619726284Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-09-09T17:02:08.621025057Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.297853ms grafana | logger=migrator t=2024-09-09T17:02:08.62511419Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-09-09T17:02:08.625264134Z level=info msg="Migration successfully executed" id="Update user table charset" duration=149.904µs grafana | logger=migrator t=2024-09-09T17:02:08.630220633Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-09-09T17:02:08.635352866Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=5.133683ms grafana | logger=migrator t=2024-09-09T17:02:08.639348249Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-09-09T17:02:08.639730516Z level=info msg="Migration successfully executed" id="Add missing user data" duration=383.067µs grafana | logger=migrator t=2024-09-09T17:02:08.644246947Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-09-09T17:02:08.64553325Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.285823ms grafana | logger=migrator t=2024-09-09T17:02:08.650029261Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-09-09T17:02:08.651070221Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.042629ms grafana | logger=migrator t=2024-09-09T17:02:08.655126393Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-09-09T17:02:08.656613231Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.485978ms grafana | logger=migrator t=2024-09-09T17:02:08.663038917Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-09-09T17:02:08.672852834Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.813927ms grafana | logger=migrator t=2024-09-09T17:02:08.675894919Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-09-09T17:02:08.676977229Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.0822ms grafana | logger=migrator t=2024-09-09T17:02:08.679919872Z level=info msg="Executing migration" id="Update uid column values for users" grafana | 
logger=migrator t=2024-09-09T17:02:08.680169916Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=250.964µs grafana | logger=migrator t=2024-09-09T17:02:08.684511525Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-09-09T17:02:08.685511743Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.000158ms grafana | logger=migrator t=2024-09-09T17:02:08.725043128Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-09-09T17:02:08.72577101Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=756.563µs grafana | logger=migrator t=2024-09-09T17:02:08.732486673Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2024-09-09T17:02:08.733195375Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=713.603µs grafana | logger=migrator t=2024-09-09T17:02:08.739860515Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2024-09-09T17:02:08.740397796Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=540.591µs grafana | logger=migrator t=2024-09-09T17:02:08.743804807Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-09-09T17:02:08.744584301Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=780.684µs grafana | logger=migrator t=2024-09-09T17:02:08.748349189Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-09-09T17:02:08.74892319Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=573.871µs grafana | logger=migrator t=2024-09-09T17:02:08.752000545Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-09-09T17:02:08.753157206Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.157411ms grafana | logger=migrator t=2024-09-09T17:02:08.759650574Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-09-09T17:02:08.760807645Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.156681ms grafana | logger=migrator t=2024-09-09T17:02:08.764150435Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-09-09T17:02:08.765301776Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.150811ms grafana | logger=migrator t=2024-09-09T17:02:08.768593456Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-09-09T17:02:08.768622956Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.5µs grafana | logger=migrator t=2024-09-09T17:02:08.775078552Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-09-09T17:02:08.776173882Z level=info 
msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.09478ms grafana | logger=migrator t=2024-09-09T17:02:08.780696144Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-09-09T17:02:08.781768853Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.072579ms grafana | logger=migrator t=2024-09-09T17:02:08.78600361Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-09-09T17:02:08.786783034Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=779.424µs grafana | logger=migrator t=2024-09-09T17:02:08.791775444Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-09-09T17:02:08.792527538Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=750.864µs grafana | logger=migrator t=2024-09-09T17:02:08.798664578Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-09-09T17:02:08.803536097Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.872659ms grafana | logger=migrator t=2024-09-09T17:02:08.807068171Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-09-09T17:02:08.808018018Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=949.437µs grafana | logger=migrator t=2024-09-09T17:02:08.813251963Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-09-09T17:02:08.814086148Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=835.925µs grafana | logger=migrator t=2024-09-09T17:02:08.818020919Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-09-09T17:02:08.818860185Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=839.186µs grafana | logger=migrator t=2024-09-09T17:02:08.822119783Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-09-09T17:02:08.822969579Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=849.525µs grafana | logger=migrator t=2024-09-09T17:02:08.828289095Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-09-09T17:02:08.82916653Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=880.586µs grafana | logger=migrator t=2024-09-09T17:02:08.833192504Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:08.833890526Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=697.352µs grafana | logger=migrator t=2024-09-09T17:02:08.838986478Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-09-09T17:02:08.839776362Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=794.274µs grafana | logger=migrator t=2024-09-09T17:02:08.843867127Z level=info msg="Executing migration" id="Set created for temp 
users that will otherwise prematurely expire" grafana | logger=migrator t=2024-09-09T17:02:08.844265274Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=407.057µs grafana | logger=migrator t=2024-09-09T17:02:08.849911026Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-09-09T17:02:08.850630958Z level=info msg="Migration successfully executed" id="create star table" duration=719.862µs grafana | logger=migrator t=2024-09-09T17:02:08.853650593Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-09-09T17:02:08.854421867Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=771.014µs grafana | logger=migrator t=2024-09-09T17:02:08.858281457Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-09-09T17:02:08.859505369Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.223322ms grafana | logger=migrator t=2024-09-09T17:02:08.867785839Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-09-09T17:02:08.869264016Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.477417ms grafana | logger=migrator t=2024-09-09T17:02:08.875932086Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-09-09T17:02:08.876999985Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.067539ms grafana | logger=migrator t=2024-09-09T17:02:08.881734621Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-09-09T17:02:08.882957274Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.221783ms grafana | logger=migrator t=2024-09-09T17:02:08.888815569Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-09-09T17:02:08.88995519Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.139431ms grafana | logger=migrator t=2024-09-09T17:02:08.895728914Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-09-09T17:02:08.896469257Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=740.073µs grafana | logger=migrator t=2024-09-09T17:02:08.904138136Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-09-09T17:02:08.904181567Z level=info msg="Migration successfully executed" id="Update org table charset" duration=44.871µs grafana | logger=migrator t=2024-09-09T17:02:08.912781722Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-09-09T17:02:08.912846133Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=69.601µs grafana | logger=migrator t=2024-09-09T17:02:08.91703516Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-09-09T17:02:08.917378456Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=343.697µs grafana | logger=migrator 
t=2024-09-09T17:02:08.930977442Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-09-09T17:02:08.931887718Z level=info msg="Migration successfully executed" id="create dashboard table" duration=910.836µs grafana | logger=migrator t=2024-09-09T17:02:08.93754632Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-09-09T17:02:08.938483967Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=937.207µs grafana | logger=migrator t=2024-09-09T17:02:08.945067586Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-09-09T17:02:08.9458531Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=785.084µs grafana | logger=migrator t=2024-09-09T17:02:08.953187153Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-09-09T17:02:08.954003518Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=815.995µs grafana | logger=migrator t=2024-09-09T17:02:08.960830851Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-09-09T17:02:08.962018473Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.188272ms grafana | logger=migrator t=2024-09-09T17:02:08.967166406Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-09-09T17:02:08.967793977Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=627.911µs grafana | logger=migrator t=2024-09-09T17:02:08.97344194Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-09-09T17:02:08.980752041Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.302092ms grafana | logger=migrator t=2024-09-09T17:02:08.987725758Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-09-09T17:02:08.988734686Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.009578ms grafana | logger=migrator t=2024-09-09T17:02:08.998604764Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-09-09T17:02:09.000108221Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.505957ms grafana | logger=migrator t=2024-09-09T17:02:09.003369831Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-09-09T17:02:09.004398309Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.028058ms grafana | logger=migrator t=2024-09-09T17:02:09.012764779Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:09.013075784Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=312.345µs grafana | logger=migrator t=2024-09-09T17:02:09.049081289Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-09-09T17:02:09.050052366Z level=info msg="Migration successfully 
executed" id="drop table dashboard_v1" duration=975.177µs grafana | logger=migrator t=2024-09-09T17:02:09.05750492Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-09-09T17:02:09.057624342Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=122.013µs grafana | logger=migrator t=2024-09-09T17:02:09.06315067Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-09-09T17:02:09.065126516Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.975626ms grafana | logger=migrator t=2024-09-09T17:02:09.073664819Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-09-09T17:02:09.076751454Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.085315ms grafana | logger=migrator t=2024-09-09T17:02:09.082243022Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.084269148Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.026085ms grafana | logger=migrator t=2024-09-09T17:02:09.09222297Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.093089526Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=867.056µs grafana | logger=migrator t=2024-09-09T17:02:09.105199392Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.108219646Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.018894ms grafana | logger=migrator t=2024-09-09T17:02:09.113095394Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.113944649Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=849.155µs grafana | logger=migrator t=2024-09-09T17:02:09.119976006Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-09-09T17:02:09.120835961Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=864.995µs grafana | logger=migrator t=2024-09-09T17:02:09.128749943Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-09-09T17:02:09.128793354Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=44.181µs grafana | logger=migrator t=2024-09-09T17:02:09.137646122Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-09-09T17:02:09.137688643Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=43.621µs grafana | logger=migrator t=2024-09-09T17:02:09.145380531Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.148622749Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.242768ms grafana | logger=migrator t=2024-09-09T17:02:09.158338483Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 
grafana | logger=migrator t=2024-09-09T17:02:09.160871568Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.533455ms grafana | logger=migrator t=2024-09-09T17:02:09.166983427Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.168998344Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.014747ms grafana | logger=migrator t=2024-09-09T17:02:09.180375717Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.183561774Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.184717ms grafana | logger=migrator t=2024-09-09T17:02:09.192042656Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.192376721Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=331.095µs grafana | logger=migrator t=2024-09-09T17:02:09.19902714Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:09.199738853Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=712.673µs grafana | logger=migrator t=2024-09-09T17:02:09.204800153Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-09-09T17:02:09.205664649Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=866.096µs grafana | logger=migrator t=2024-09-09T17:02:09.209462437Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-09-09T17:02:09.209491187Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.17µs grafana | logger=migrator t=2024-09-09T17:02:09.214005188Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-09-09T17:02:09.215448954Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.443316ms grafana | logger=migrator t=2024-09-09T17:02:09.220715568Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-09-09T17:02:09.221558463Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=842.835µs grafana | logger=migrator t=2024-09-09T17:02:09.22644212Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-09-09T17:02:09.232523989Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.076679ms grafana | logger=migrator t=2024-09-09T17:02:09.241673353Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-09-09T17:02:09.242479957Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=806.964µs grafana | logger=migrator t=2024-09-09T17:02:09.249796878Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-09-09T17:02:09.250629643Z level=info msg="Migration successfully executed" 
id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=832.645µs grafana | logger=migrator t=2024-09-09T17:02:09.255352327Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-09-09T17:02:09.256128772Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=775.875µs grafana | logger=migrator t=2024-09-09T17:02:09.260085622Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:09.260399747Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=314.035µs grafana | logger=migrator t=2024-09-09T17:02:09.27058531Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-09-09T17:02:09.27111991Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=534.16µs grafana | logger=migrator t=2024-09-09T17:02:09.280263243Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-09-09T17:02:09.28230426Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.040417ms grafana | logger=migrator t=2024-09-09T17:02:09.287102396Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-09-09T17:02:09.287950601Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=849.675µs grafana | logger=migrator t=2024-09-09T17:02:09.294447877Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-09-09T17:02:09.29464867Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=201.324µs grafana | logger=migrator t=2024-09-09T17:02:09.299264683Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-09-09T17:02:09.299456236Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=191.553µs grafana | logger=migrator t=2024-09-09T17:02:09.305338251Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-09-09T17:02:09.306601465Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.263173ms grafana | logger=migrator t=2024-09-09T17:02:09.311674225Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.313874424Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.200219ms grafana | logger=migrator t=2024-09-09T17:02:09.320502703Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2024-09-09T17:02:09.322665211Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.162038ms grafana | logger=migrator t=2024-09-09T17:02:09.357577676Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2024-09-09T17:02:09.359214985Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.642069ms grafana | logger=migrator t=2024-09-09T17:02:09.365029689Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator 
t=2024-09-09T17:02:09.366040197Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.010288ms grafana | logger=migrator t=2024-09-09T17:02:09.36900882Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-09-09T17:02:09.369850965Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=841.605µs grafana | logger=migrator t=2024-09-09T17:02:09.378264255Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-09-09T17:02:09.379143062Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=878.437µs grafana | logger=migrator t=2024-09-09T17:02:09.39133416Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-09-09T17:02:09.392610342Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.277342ms grafana | logger=migrator t=2024-09-09T17:02:09.398409567Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-09-09T17:02:09.399632298Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.223051ms grafana | logger=migrator t=2024-09-09T17:02:09.406771646Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-09-09T17:02:09.41706686Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=10.294784ms grafana | logger=migrator t=2024-09-09T17:02:09.421055352Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-09-09T17:02:09.421655042Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=602.91µs grafana | logger=migrator t=2024-09-09T17:02:09.426952027Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-09-09T17:02:09.428153038Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.199201ms grafana | logger=migrator t=2024-09-09T17:02:09.436151831Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-09-09T17:02:09.437405503Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.252802ms grafana | logger=migrator t=2024-09-09T17:02:09.445421327Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-09-09T17:02:09.446340164Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=917.787µs grafana | logger=migrator t=2024-09-09T17:02:09.452471003Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-09-09T17:02:09.456892372Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.421019ms grafana | logger=migrator t=2024-09-09T17:02:09.461094747Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-09-09T17:02:09.463441059Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.345822ms grafana | 
logger=migrator t=2024-09-09T17:02:09.470685208Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-09-09T17:02:09.470712129Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.931µs grafana | logger=migrator t=2024-09-09T17:02:09.481354779Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-09-09T17:02:09.481556613Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=201.864µs grafana | logger=migrator t=2024-09-09T17:02:09.489428034Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-09-09T17:02:09.49200263Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.567246ms grafana | logger=migrator t=2024-09-09T17:02:09.494910672Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-09-09T17:02:09.495107395Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=198.703µs grafana | logger=migrator t=2024-09-09T17:02:09.500248327Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-09-09T17:02:09.500466151Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=218.054µs grafana | logger=migrator t=2024-09-09T17:02:09.504004144Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-09-09T17:02:09.506476808Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.472604ms grafana | logger=migrator t=2024-09-09T17:02:09.511799294Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-09-09T17:02:09.511976277Z level=info msg="Migration successfully executed" id="Update uid value" duration=177.453µs grafana | logger=migrator t=2024-09-09T17:02:09.520007371Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:09.521515187Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.506486ms grafana | logger=migrator t=2024-09-09T17:02:09.528427972Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-09-09T17:02:09.529579252Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.15454ms grafana | logger=migrator t=2024-09-09T17:02:09.535666771Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2024-09-09T17:02:09.538619794Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.956933ms grafana | logger=migrator t=2024-09-09T17:02:09.547166147Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2024-09-09T17:02:09.550882913Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.716956ms grafana | logger=migrator t=2024-09-09T17:02:09.555241411Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-09-09T17:02:09.556411232Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.169661ms grafana | logger=migrator t=2024-09-09T17:02:09.562613713Z level=info msg="Executing migration" id="add 
index api_key.account_id" grafana | logger=migrator t=2024-09-09T17:02:09.563365466Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=751.303µs grafana | logger=migrator t=2024-09-09T17:02:09.567131114Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-09-09T17:02:09.568415186Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.292282ms grafana | logger=migrator t=2024-09-09T17:02:09.577876906Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-09-09T17:02:09.579114238Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.254002ms grafana | logger=migrator t=2024-09-09T17:02:09.589208219Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-09-09T17:02:09.590387449Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.18239ms grafana | logger=migrator t=2024-09-09T17:02:09.594055865Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-09-09T17:02:09.595566982Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.510887ms grafana | logger=migrator t=2024-09-09T17:02:09.599868399Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-09-09T17:02:09.600601972Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=734.384µs grafana | logger=migrator t=2024-09-09T17:02:09.604511002Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-09-09T17:02:09.611525058Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.013176ms grafana | logger=migrator t=2024-09-09T17:02:09.614629373Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-09-09T17:02:09.615307945Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=678.192µs grafana | logger=migrator t=2024-09-09T17:02:09.621964924Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-09-09T17:02:09.623716715Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.755931ms grafana | logger=migrator t=2024-09-09T17:02:09.632199097Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-09-09T17:02:09.63292034Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=721.213µs grafana | logger=migrator t=2024-09-09T17:02:09.639239384Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-09-09T17:02:09.640428955Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.189091ms grafana | logger=migrator t=2024-09-09T17:02:09.643619022Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:09.644055299Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=437.197µs grafana | logger=migrator t=2024-09-09T17:02:09.646819979Z 
level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-09-09T17:02:09.647364468Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=544.539µs grafana | logger=migrator t=2024-09-09T17:02:09.687183451Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-09-09T17:02:09.687216692Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=35.141µs grafana | logger=migrator t=2024-09-09T17:02:09.690417559Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-09-09T17:02:09.69441039Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.991881ms grafana | logger=migrator t=2024-09-09T17:02:09.697520705Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-09-09T17:02:09.70003396Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.512815ms grafana | logger=migrator t=2024-09-09T17:02:09.704785906Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-09-09T17:02:09.704941288Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=155.583µs grafana | logger=migrator t=2024-09-09T17:02:09.707139588Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-09-09T17:02:09.709644473Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.504575ms grafana | logger=migrator t=2024-09-09T17:02:09.712626256Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-09-09T17:02:09.715142381Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.515665ms grafana | logger=migrator t=2024-09-09T17:02:09.718351218Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-09-09T17:02:09.719076791Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=725.393µs grafana | logger=migrator t=2024-09-09T17:02:09.723612962Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-09-09T17:02:09.724118711Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=505.649µs grafana | logger=migrator t=2024-09-09T17:02:09.727174946Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-09-09T17:02:09.72798895Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=813.784µs grafana | logger=migrator t=2024-09-09T17:02:09.733055441Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-09-09T17:02:09.734255762Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.197861ms grafana | logger=migrator t=2024-09-09T17:02:09.737418369Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-09-09T17:02:09.738666771Z level=info msg="Migration successfully executed" id="create 
index UQE_dashboard_snapshot_delete_key - v5" duration=1.238352ms grafana | logger=migrator t=2024-09-09T17:02:09.741656055Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-09-09T17:02:09.742949228Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.291713ms grafana | logger=migrator t=2024-09-09T17:02:09.748546728Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-09-09T17:02:09.74862292Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=76.462µs grafana | logger=migrator t=2024-09-09T17:02:09.750655735Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-09-09T17:02:09.750677576Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=22.451µs grafana | logger=migrator t=2024-09-09T17:02:09.753471016Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-09-09T17:02:09.756146734Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.675058ms grafana | logger=migrator t=2024-09-09T17:02:09.759113697Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-09-09T17:02:09.761785585Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.671868ms grafana | logger=migrator t=2024-09-09T17:02:09.766593361Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-09-09T17:02:09.766656572Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=63.421µs grafana | logger=migrator t=2024-09-09T17:02:09.769487923Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-09-09T17:02:09.770191005Z level=info msg="Migration successfully executed" id="create quota table v1" duration=702.362µs grafana | logger=migrator t=2024-09-09T17:02:09.772885143Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-09-09T17:02:09.774096706Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.209472ms grafana | logger=migrator t=2024-09-09T17:02:09.779217856Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-09-09T17:02:09.779254657Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=37.751µs grafana | logger=migrator t=2024-09-09T17:02:09.782259772Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-09-09T17:02:09.783017585Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=757.663µs grafana | logger=migrator t=2024-09-09T17:02:09.786040409Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-09-09T17:02:09.786899804Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=858.475µs grafana | 
logger=migrator t=2024-09-09T17:02:09.791918844Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-09-09T17:02:09.796420115Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.497811ms grafana | logger=migrator t=2024-09-09T17:02:09.799623662Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-09-09T17:02:09.799661463Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=39.661µs grafana | logger=migrator t=2024-09-09T17:02:09.802656976Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-09-09T17:02:09.803374729Z level=info msg="Migration successfully executed" id="create session table" duration=717.463µs grafana | logger=migrator t=2024-09-09T17:02:09.806331341Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-09-09T17:02:09.806407833Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=76.802µs grafana | logger=migrator t=2024-09-09T17:02:09.810986785Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-09-09T17:02:09.811107067Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=120.602µs grafana | logger=migrator t=2024-09-09T17:02:09.814106831Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-09-09T17:02:09.815176579Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.068718ms grafana | logger=migrator t=2024-09-09T17:02:09.819833583Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-09-09T17:02:09.820538985Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=705.442µs grafana | logger=migrator t=2024-09-09T17:02:09.825370782Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-09-09T17:02:09.825412463Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=40.171µs grafana | logger=migrator t=2024-09-09T17:02:09.828407006Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-09-09T17:02:09.828443177Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.341µs grafana | logger=migrator t=2024-09-09T17:02:09.830954162Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-09-09T17:02:09.835112486Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.158504ms grafana | logger=migrator t=2024-09-09T17:02:09.838079609Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-09-09T17:02:09.841273247Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.195948ms grafana | logger=migrator t=2024-09-09T17:02:09.851998758Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-09-09T17:02:09.85212209Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=124.112µs grafana | 
logger=migrator t=2024-09-09T17:02:09.861439807Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-09-09T17:02:09.861560919Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=130.562µs grafana | logger=migrator t=2024-09-09T17:02:09.872159449Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-09-09T17:02:09.873407841Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.247782ms grafana | logger=migrator t=2024-09-09T17:02:09.882107737Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-09-09T17:02:09.882132667Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.04µs grafana | logger=migrator t=2024-09-09T17:02:09.886337053Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2024-09-09T17:02:09.889366817Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.029234ms grafana | logger=migrator t=2024-09-09T17:02:09.897102836Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-09-09T17:02:09.897245828Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=143.033µs grafana | logger=migrator t=2024-09-09T17:02:09.903484529Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-09-09T17:02:09.908449298Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.964089ms grafana | logger=migrator t=2024-09-09T17:02:09.915745298Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-09-09T17:02:09.919429525Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.682397ms grafana | logger=migrator t=2024-09-09T17:02:09.923571088Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-09-09T17:02:09.923714961Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=144.673µs grafana | logger=migrator t=2024-09-09T17:02:09.927349486Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-09-09T17:02:09.928355114Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.005018ms grafana | logger=migrator t=2024-09-09T17:02:09.931998329Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-09-09T17:02:09.932946426Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=947.487µs grafana | logger=migrator t=2024-09-09T17:02:09.936566221Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-09-09T17:02:09.938255551Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.6894ms grafana | logger=migrator t=2024-09-09T17:02:09.942163791Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-09-09T17:02:09.943174479Z level=info msg="Migration successfully executed" id="add index alert org_id & id " 
duration=1.028498ms grafana | logger=migrator t=2024-09-09T17:02:09.946665351Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-09-09T17:02:09.947647439Z level=info msg="Migration successfully executed" id="add index alert state" duration=990.908µs grafana | logger=migrator t=2024-09-09T17:02:09.950527541Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-09-09T17:02:09.951580929Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.052998ms grafana | logger=migrator t=2024-09-09T17:02:09.955515419Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-09-09T17:02:09.956264223Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=748.094µs grafana | logger=migrator t=2024-09-09T17:02:09.959567052Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-09-09T17:02:09.96058455Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.017248ms grafana | logger=migrator t=2024-09-09T17:02:09.964576882Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-09-09T17:02:09.965744122Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.16033ms grafana | logger=migrator t=2024-09-09T17:02:09.96898745Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-09-09T17:02:09.980908503Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.909643ms grafana | logger=migrator t=2024-09-09T17:02:10.015267497Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-09-09T17:02:10.016524219Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.275452ms grafana | logger=migrator t=2024-09-09T17:02:10.020740474Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-09-09T17:02:10.022378663Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.638059ms grafana | logger=migrator t=2024-09-09T17:02:10.053628878Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:10.054100108Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=467.889µs grafana | logger=migrator t=2024-09-09T17:02:10.065665853Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-09-09T17:02:10.066328214Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=661.861µs grafana | logger=migrator t=2024-09-09T17:02:10.069510141Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-09-09T17:02:10.071328303Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.817382ms grafana | logger=migrator 
t=2024-09-09T17:02:10.075004868Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-09-09T17:02:10.079836054Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.835506ms grafana | logger=migrator t=2024-09-09T17:02:10.083371527Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-09-09T17:02:10.087023712Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.651615ms grafana | logger=migrator t=2024-09-09T17:02:10.090226118Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-09-09T17:02:10.093858544Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.631965ms grafana | logger=migrator t=2024-09-09T17:02:10.097356275Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-09-09T17:02:10.105084033Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=7.726728ms grafana | logger=migrator t=2024-09-09T17:02:10.108521344Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-09-09T17:02:10.109186445Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=664.941µs grafana | logger=migrator t=2024-09-09T17:02:10.112112967Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-09-09T17:02:10.112129628Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=17.571µs grafana | logger=migrator t=2024-09-09T17:02:10.115464518Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-09-09T17:02:10.115486908Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=23.24µs grafana | logger=migrator t=2024-09-09T17:02:10.119406467Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-09-09T17:02:10.12071014Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.302773ms grafana | logger=migrator t=2024-09-09T17:02:10.126118136Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-09-09T17:02:10.127042743Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=924.007µs grafana | logger=migrator t=2024-09-09T17:02:10.130298831Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2024-09-09T17:02:10.131134685Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=835.264µs grafana | logger=migrator t=2024-09-09T17:02:10.134412993Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-09-09T17:02:10.135428092Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.008959ms grafana | logger=migrator t=2024-09-09T17:02:10.140184116Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-09-09T17:02:10.141120863Z level=info 
msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=936.467µs grafana | logger=migrator t=2024-09-09T17:02:10.144546734Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-09-09T17:02:10.14827577Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.725866ms grafana | logger=migrator t=2024-09-09T17:02:10.151648189Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-09-09T17:02:10.158194316Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.545057ms grafana | logger=migrator t=2024-09-09T17:02:10.163943198Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-09-09T17:02:10.164487328Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=543.69µs grafana | logger=migrator t=2024-09-09T17:02:10.168309966Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:10.170463024Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=2.154709ms grafana | logger=migrator t=2024-09-09T17:02:10.176120185Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-09-09T17:02:10.177290456Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.178441ms grafana | logger=migrator t=2024-09-09T17:02:10.185994751Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-09-09T17:02:10.190363258Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.370137ms grafana | logger=migrator t=2024-09-09T17:02:10.193446372Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-09-09T17:02:10.193528774Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=83.672µs grafana | logger=migrator t=2024-09-09T17:02:10.196991106Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-09-09T17:02:10.198604094Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.612728ms grafana | logger=migrator t=2024-09-09T17:02:10.203616093Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-09-09T17:02:10.205026348Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.410105ms grafana | logger=migrator t=2024-09-09T17:02:10.208552181Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-09-09T17:02:10.208640923Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.892µs grafana | logger=migrator t=2024-09-09T17:02:10.211122527Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-09-09T17:02:10.212076694Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=953.016µs grafana 
| logger=migrator t=2024-09-09T17:02:10.217169704Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-09-09T17:02:10.219388334Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=2.21318ms grafana | logger=migrator t=2024-09-09T17:02:10.22316105Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-09-09T17:02:10.22481236Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.65124ms grafana | logger=migrator t=2024-09-09T17:02:10.228443874Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-09-09T17:02:10.229495803Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.050979ms grafana | logger=migrator t=2024-09-09T17:02:10.234931219Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-09-09T17:02:10.236701721Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.770262ms grafana | logger=migrator t=2024-09-09T17:02:10.240545829Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-09-09T17:02:10.24227855Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.733531ms grafana | logger=migrator t=2024-09-09T17:02:10.245563838Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-09-09T17:02:10.245597829Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=92.432µs grafana | logger=migrator t=2024-09-09T17:02:10.250203641Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.254473516Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.269405ms grafana | logger=migrator t=2024-09-09T17:02:10.257769885Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-09-09T17:02:10.258696742Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=924.757µs grafana | logger=migrator t=2024-09-09T17:02:10.261701275Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.265680356Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.976221ms grafana | logger=migrator t=2024-09-09T17:02:10.270140076Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-09-09T17:02:10.270902919Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=762.573µs grafana | logger=migrator t=2024-09-09T17:02:10.274056065Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2024-09-09T17:02:10.275105393Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.049278ms grafana | logger=migrator t=2024-09-09T17:02:10.278183279Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-09-09T17:02:10.279160366Z level=info msg="Migration successfully executed" id="drop index 
UQE_annotation_tag_annotation_id_tag_id - v2" duration=977.227µs grafana | logger=migrator t=2024-09-09T17:02:10.284107324Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2024-09-09T17:02:10.295559047Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.450723ms grafana | logger=migrator t=2024-09-09T17:02:10.299383625Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2024-09-09T17:02:10.300008375Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=624.341µs grafana | logger=migrator t=2024-09-09T17:02:10.303169612Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2024-09-09T17:02:10.303917165Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=745.513µs grafana | logger=migrator t=2024-09-09T17:02:10.30870647Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2024-09-09T17:02:10.309087957Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=381.227µs grafana | logger=migrator t=2024-09-09T17:02:10.312324045Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-09-09T17:02:10.312966976Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=645.511µs grafana | logger=migrator t=2024-09-09T17:02:10.316897815Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-09-09T17:02:10.317351784Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=451.739µs grafana | logger=migrator t=2024-09-09T17:02:10.322147119Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.329270786Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.121637ms grafana | logger=migrator t=2024-09-09T17:02:10.361516928Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.368241318Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.72411ms grafana | logger=migrator t=2024-09-09T17:02:10.371685899Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.372449923Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=759.934µs grafana | logger=migrator t=2024-09-09T17:02:10.375524577Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.376265341Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=740.704µs grafana | logger=migrator t=2024-09-09T17:02:10.380799761Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-09-09T17:02:10.38131873Z level=info 
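
The annotation_tag v2-to-v3 steps just above follow the classic table-rebuild pattern used when a table's shape has to change on engines with limited ALTER TABLE support: rename the existing table aside, create the new schema plus its unique index, copy the rows across, then drop the renamed original. A minimal sketch of the same four steps, assuming a simplified schema and using Python's stdlib sqlite3 purely for illustration (Grafana's migrator itself is Go, and its real DDL is not shown in this log):

import sqlite3

con = sqlite3.connect(":memory:")
# old shape: no uniqueness guarantee, so duplicate pairs can exist
con.execute("CREATE TABLE annotation_tag (annotation_id INTEGER, tag_id INTEGER)")
con.execute("INSERT INTO annotation_tag VALUES (1, 10), (1, 10), (2, 20)")

# 1. rename the existing table aside
con.execute("ALTER TABLE annotation_tag RENAME TO annotation_tag_v2")
# 2. create the new shape and the unique index the old table lacked
con.execute("CREATE TABLE annotation_tag (id INTEGER PRIMARY KEY, annotation_id INTEGER, tag_id INTEGER)")
con.execute("CREATE UNIQUE INDEX UQE_annotation_tag_annotation_id_tag_id ON annotation_tag (annotation_id, tag_id)")
# 3. copy the data, collapsing rows that would violate the new index
con.execute("INSERT INTO annotation_tag (annotation_id, tag_id) SELECT DISTINCT annotation_id, tag_id FROM annotation_tag_v2")
# 4. drop the renamed original
con.execute("DROP TABLE annotation_tag_v2")
print(con.execute("SELECT annotation_id, tag_id FROM annotation_tag").fetchall())  # [(1, 10), (2, 20)]

The 11.45 ms on the rename step, against microseconds for the copy and drop, matches the rename being the heaviest of the four operations in this run.
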
msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=518.279µs grafana | logger=migrator t=2024-09-09T17:02:10.384890794Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-09-09T17:02:10.389521076Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.631152ms grafana | logger=migrator t=2024-09-09T17:02:10.392811395Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-09-09T17:02:10.393862783Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.057388ms grafana | logger=migrator t=2024-09-09T17:02:10.398603888Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-09-09T17:02:10.398883013Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=275.325µs grafana | logger=migrator t=2024-09-09T17:02:10.40329552Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-09-09T17:02:10.404004274Z level=info msg="Migration successfully executed" id="Move region to single row" duration=708.764µs grafana | logger=migrator t=2024-09-09T17:02:10.407457275Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.40887695Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.419725ms grafana | logger=migrator t=2024-09-09T17:02:10.41231409Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.413211747Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=897.717µs grafana | logger=migrator t=2024-09-09T17:02:10.418098254Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.419294155Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.195321ms grafana | logger=migrator t=2024-09-09T17:02:10.42237331Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.423344766Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=971.887µs grafana | logger=migrator t=2024-09-09T17:02:10.426249418Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.427161335Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=911.947µs grafana | logger=migrator t=2024-09-09T17:02:10.432047932Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-09-09T17:02:10.432987928Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=937.196µs grafana | logger=migrator t=2024-09-09T17:02:10.436315497Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-09-09T17:02:10.43649183Z 
level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=175.863µs grafana | logger=migrator t=2024-09-09T17:02:10.439247739Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-09-09T17:02:10.440604094Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.356215ms grafana | logger=migrator t=2024-09-09T17:02:10.445339877Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-09-09T17:02:10.446276385Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=936.147µs grafana | logger=migrator t=2024-09-09T17:02:10.449269308Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-09-09T17:02:10.450262775Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=992.747µs grafana | logger=migrator t=2024-09-09T17:02:10.45334793Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-09-09T17:02:10.454317306Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=969.426µs grafana | logger=migrator t=2024-09-09T17:02:10.458868438Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-09-09T17:02:10.459158873Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=290.065µs grafana | logger=migrator t=2024-09-09T17:02:10.462565203Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2024-09-09T17:02:10.463257246Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=691.783µs grafana | logger=migrator t=2024-09-09T17:02:10.466904311Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-09-09T17:02:10.467171936Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=267.355µs grafana | logger=migrator t=2024-09-09T17:02:10.471490423Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-09-09T17:02:10.472292837Z level=info msg="Migration successfully executed" id="create team table" duration=801.974µs grafana | logger=migrator t=2024-09-09T17:02:10.475789369Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-09-09T17:02:10.476784376Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=994.517µs grafana | logger=migrator t=2024-09-09T17:02:10.4803544Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-09-09T17:02:10.481306367Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=951.397µs grafana | logger=migrator t=2024-09-09T17:02:10.485697025Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-09-09T17:02:10.490383368Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.685683ms grafana | logger=migrator 
t=2024-09-09T17:02:10.493820289Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-09-09T17:02:10.494136535Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=315.876µs grafana | logger=migrator t=2024-09-09T17:02:10.497540815Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:10.498514422Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=973.417µs grafana | logger=migrator t=2024-09-09T17:02:10.502110617Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2024-09-09T17:02:10.502988862Z level=info msg="Migration successfully executed" id="create team member table" duration=877.785µs grafana | logger=migrator t=2024-09-09T17:02:10.507001883Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2024-09-09T17:02:10.508034722Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.032289ms grafana | logger=migrator t=2024-09-09T17:02:10.51130163Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2024-09-09T17:02:10.512280817Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=976.277µs grafana | logger=migrator t=2024-09-09T17:02:10.515709437Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2024-09-09T17:02:10.516743206Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.033179ms grafana | logger=migrator t=2024-09-09T17:02:10.521045212Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2024-09-09T17:02:10.525817127Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.771015ms grafana | logger=migrator t=2024-09-09T17:02:10.529510223Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2024-09-09T17:02:10.534488941Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.978038ms grafana | logger=migrator t=2024-09-09T17:02:10.538128056Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2024-09-09T17:02:10.543173005Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.044179ms grafana | logger=migrator t=2024-09-09T17:02:10.547650445Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2024-09-09T17:02:10.548773995Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.12262ms grafana | logger=migrator t=2024-09-09T17:02:10.553933717Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2024-09-09T17:02:10.555699758Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.769871ms grafana | logger=migrator t=2024-09-09T17:02:10.558785823Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-09-09T17:02:10.559709639Z level=info msg="Migration successfully 
executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=923.606µs grafana | logger=migrator t=2024-09-09T17:02:10.563856204Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2024-09-09T17:02:10.564744789Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=885.285µs grafana | logger=migrator t=2024-09-09T17:02:10.570057814Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2024-09-09T17:02:10.570934019Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=875.755µs grafana | logger=migrator t=2024-09-09T17:02:10.57379171Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2024-09-09T17:02:10.575170735Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.379955ms grafana | logger=migrator t=2024-09-09T17:02:10.57829541Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2024-09-09T17:02:10.579682304Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.386174ms grafana | logger=migrator t=2024-09-09T17:02:10.584861086Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2024-09-09T17:02:10.585749342Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=885.966µs grafana | logger=migrator t=2024-09-09T17:02:10.588567422Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2024-09-09T17:02:10.589027671Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=460.159µs grafana | logger=migrator t=2024-09-09T17:02:10.593233535Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2024-09-09T17:02:10.593600712Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=366.497µs grafana | logger=migrator t=2024-09-09T17:02:10.5969202Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2024-09-09T17:02:10.598027831Z level=info msg="Migration successfully executed" id="create tag table" duration=1.110581ms grafana | logger=migrator t=2024-09-09T17:02:10.602431218Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2024-09-09T17:02:10.603344355Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=914.837µs grafana | logger=migrator t=2024-09-09T17:02:10.606517921Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2024-09-09T17:02:10.607225884Z level=info msg="Migration successfully executed" id="create login attempt table" duration=707.713µs grafana | logger=migrator t=2024-09-09T17:02:10.610510602Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2024-09-09T17:02:10.611423679Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=912.467µs grafana | logger=migrator t=2024-09-09T17:02:10.61543676Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 
grafana | logger=migrator t=2024-09-09T17:02:10.616302035Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=865.015µs grafana | logger=migrator t=2024-09-09T17:02:10.619519682Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2024-09-09T17:02:10.63404319Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.523028ms grafana | logger=migrator t=2024-09-09T17:02:10.637440371Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2024-09-09T17:02:10.638164973Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=721.972µs grafana | logger=migrator t=2024-09-09T17:02:10.642506811Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2024-09-09T17:02:10.643375656Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=870.505µs grafana | logger=migrator t=2024-09-09T17:02:10.713326249Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:10.713764446Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=438.387µs grafana | logger=migrator t=2024-09-09T17:02:10.717160467Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2024-09-09T17:02:10.718082014Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=921.087µs grafana | logger=migrator t=2024-09-09T17:02:10.722332109Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2024-09-09T17:02:10.723058751Z level=info msg="Migration successfully executed" id="create user auth table" duration=726.153µs grafana | logger=migrator t=2024-09-09T17:02:10.726256579Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2024-09-09T17:02:10.727189005Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=931.566µs grafana | logger=migrator t=2024-09-09T17:02:10.73029193Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2024-09-09T17:02:10.730351301Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=59.471µs grafana | logger=migrator t=2024-09-09T17:02:10.734100957Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.739087226Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.985569ms grafana | logger=migrator t=2024-09-09T17:02:10.742495436Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.747544556Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.05122ms grafana | logger=migrator t=2024-09-09T17:02:10.751064809Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.756039267Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.971878ms 
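
The user_auth entries following the login_attempt rebuild are purely additive: each OAuth field ("Add OAuth access token", "Add OAuth refresh token", "Add OAuth token type", and just below, expiry and ID token) arrives as a new nullable column, which is why each lands in single-digit milliseconds even on a populated table. In sketch form, again with stdlib sqlite3 and an assumed minimal schema (the column names are guesses for illustration; the log records only the migration ids, not the DDL):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_auth (id INTEGER PRIMARY KEY, user_id INTEGER, auth_module TEXT, auth_id TEXT)")

# additive migrations: nullable columns need no rewrite of existing rows
for ddl in (
    "ALTER TABLE user_auth ADD COLUMN o_auth_access_token TEXT",
    "ALTER TABLE user_auth ADD COLUMN o_auth_refresh_token TEXT",
    "ALTER TABLE user_auth ADD COLUMN o_auth_token_type TEXT",
):
    con.execute(ddl)
print([row[1] for row in con.execute("PRAGMA table_info(user_auth)")])
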
grafana | logger=migrator t=2024-09-09T17:02:10.760298474Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.765351883Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.05278ms grafana | logger=migrator t=2024-09-09T17:02:10.769002798Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.769938735Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=935.857µs grafana | logger=migrator t=2024-09-09T17:02:10.773557989Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-09-09T17:02:10.781449129Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.88938ms grafana | logger=migrator t=2024-09-09T17:02:10.785841897Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2024-09-09T17:02:10.786425898Z level=info msg="Migration successfully executed" id="create server_lock table" duration=583.35µs grafana | logger=migrator t=2024-09-09T17:02:10.789928769Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2024-09-09T17:02:10.790605592Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=676.573µs grafana | logger=migrator t=2024-09-09T17:02:10.795444447Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2024-09-09T17:02:10.796774132Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.326925ms grafana | logger=migrator t=2024-09-09T17:02:10.801302851Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2024-09-09T17:02:10.80287535Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.572059ms grafana | logger=migrator t=2024-09-09T17:02:10.806197409Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2024-09-09T17:02:10.807645775Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.447056ms grafana | logger=migrator t=2024-09-09T17:02:10.810883812Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2024-09-09T17:02:10.811831569Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=947.147µs grafana | logger=migrator t=2024-09-09T17:02:10.815650686Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2024-09-09T17:02:10.821009951Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.357855ms grafana | logger=migrator t=2024-09-09T17:02:10.824073126Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2024-09-09T17:02:10.824961212Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=887.426µs grafana | logger=migrator t=2024-09-09T17:02:10.828182059Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2024-09-09T17:02:10.828997303Z level=info 
msg="Migration successfully executed" id="create cache_data table" duration=811.994µs grafana | logger=migrator t=2024-09-09T17:02:10.832894303Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2024-09-09T17:02:10.83386472Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=967.547µs grafana | logger=migrator t=2024-09-09T17:02:10.836771042Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2024-09-09T17:02:10.838132375Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.360773ms grafana | logger=migrator t=2024-09-09T17:02:10.841528337Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-09-09T17:02:10.843014232Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.484706ms grafana | logger=migrator t=2024-09-09T17:02:10.847408311Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-09-09T17:02:10.847475282Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.371µs grafana | logger=migrator t=2024-09-09T17:02:10.850502596Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-09-09T17:02:10.850601657Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=99.041µs grafana | logger=migrator t=2024-09-09T17:02:10.853884236Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-09-09T17:02:10.85523085Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.345264ms grafana | logger=migrator t=2024-09-09T17:02:10.858376006Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-09-09T17:02:10.859885942Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.508756ms grafana | logger=migrator t=2024-09-09T17:02:10.864004006Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-09-09T17:02:10.864926912Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=922.426µs grafana | logger=migrator t=2024-09-09T17:02:10.868979704Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-09-09T17:02:10.869052126Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=72.832µs grafana | logger=migrator t=2024-09-09T17:02:10.872397335Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-09-09T17:02:10.87383661Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.438315ms grafana | logger=migrator t=2024-09-09T17:02:10.878043806Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-09-09T17:02:10.878954141Z level=info 
msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=910.356µs grafana | logger=migrator t=2024-09-09T17:02:10.882060876Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-09-09T17:02:10.883023273Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=962.027µs grafana | logger=migrator t=2024-09-09T17:02:10.886996504Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-09-09T17:02:10.8879203Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=923.856µs grafana | logger=migrator t=2024-09-09T17:02:10.891204109Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-09-09T17:02:10.896719167Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.513967ms grafana | logger=migrator t=2024-09-09T17:02:10.899945565Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-09-09T17:02:10.90080403Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=857.355µs grafana | logger=migrator t=2024-09-09T17:02:10.905773838Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-09-09T17:02:10.90585317Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=79.781µs grafana | logger=migrator t=2024-09-09T17:02:10.909301971Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-09-09T17:02:10.910765076Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.461906ms grafana | logger=migrator t=2024-09-09T17:02:10.91433716Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-09-09T17:02:10.915309977Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=972.327µs grafana | logger=migrator t=2024-09-09T17:02:10.918834309Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2024-09-09T17:02:10.920688983Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.857044ms grafana | logger=migrator t=2024-09-09T17:02:10.923961371Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-09-09T17:02:10.924026802Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=66.301µs grafana | logger=migrator t=2024-09-09T17:02:10.928274608Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-09-09T17:02:10.929313236Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" 
duration=1.038468ms grafana | logger=migrator t=2024-09-09T17:02:10.93237053Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2024-09-09T17:02:10.934003189Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.631599ms grafana | logger=migrator t=2024-09-09T17:02:10.937308358Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-09-09T17:02:10.938927276Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.618268ms grafana | logger=migrator t=2024-09-09T17:02:10.94474143Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-09-09T17:02:10.945785779Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.043558ms grafana | logger=migrator t=2024-09-09T17:02:10.948748991Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2024-09-09T17:02:10.958109297Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.359136ms grafana | logger=migrator t=2024-09-09T17:02:10.96107513Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-09-09T17:02:10.961802194Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=727.374µs grafana | logger=migrator t=2024-09-09T17:02:10.965912956Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-09-09T17:02:10.966655199Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=741.963µs grafana | logger=migrator t=2024-09-09T17:02:10.969319477Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-09-09T17:02:10.996307176Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.98968ms grafana | logger=migrator t=2024-09-09T17:02:11.064941335Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-09-09T17:02:11.091442276Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.504321ms grafana | logger=migrator t=2024-09-09T17:02:11.094928998Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-09-09T17:02:11.096027268Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.09737ms grafana | logger=migrator t=2024-09-09T17:02:11.099257195Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-09-09T17:02:11.100317814Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.059798ms grafana | logger=migrator t=2024-09-09T17:02:11.10460687Z 
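
Note the order of operations in the alert_instance block above: the composite indexes over def_org_id/def_uid are dropped first, the two columns are renamed (the ~27 ms entries, by far the slowest steps in this stretch), and only then are equivalent indexes re-created against the new names. The same sequence under an assumed schema:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alert_instance (def_org_id INTEGER, def_uid TEXT, current_state TEXT)")
con.execute("CREATE INDEX IDX_alert_instance_def ON alert_instance (def_org_id, def_uid, current_state)")

# 1. drop indexes that reference the old column names
con.execute("DROP INDEX IDX_alert_instance_def")
# 2. rename the columns (RENAME COLUMN needs SQLite >= 3.25; older engines rebuild the table)
con.execute("ALTER TABLE alert_instance RENAME COLUMN def_org_id TO rule_org_id")
con.execute("ALTER TABLE alert_instance RENAME COLUMN def_uid TO rule_uid")
# 3. re-create the equivalent index on the new names
con.execute("CREATE INDEX IDX_alert_instance_rule ON alert_instance (rule_org_id, rule_uid, current_state)")
print([row[1] for row in con.execute("PRAGMA table_info(alert_instance)")])
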
level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2024-09-09T17:02:11.110573736Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.966756ms grafana | logger=migrator t=2024-09-09T17:02:11.116257407Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-09-09T17:02:11.121910777Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.65338ms grafana | logger=migrator t=2024-09-09T17:02:11.125736105Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-09-09T17:02:11.126695672Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=959.897µs grafana | logger=migrator t=2024-09-09T17:02:11.131306744Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-09-09T17:02:11.132888852Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.580568ms grafana | logger=migrator t=2024-09-09T17:02:11.136475076Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-09-09T17:02:11.138073525Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.597889ms grafana | logger=migrator t=2024-09-09T17:02:11.142972841Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-09-09T17:02:11.144832384Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.859493ms grafana | logger=migrator t=2024-09-09T17:02:11.149731112Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-09-09T17:02:11.149798923Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.952µs grafana | logger=migrator t=2024-09-09T17:02:11.153721352Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.163334423Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.616491ms grafana | logger=migrator t=2024-09-09T17:02:11.167524087Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.173844659Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.320002ms grafana | logger=migrator t=2024-09-09T17:02:11.177333602Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.18342003Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.085888ms grafana | logger=migrator t=2024-09-09T17:02:11.186788279Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2024-09-09T17:02:11.187812107Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.004498ms grafana | 
logger=migrator t=2024-09-09T17:02:11.196301718Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2024-09-09T17:02:11.198245853Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.948075ms grafana | logger=migrator t=2024-09-09T17:02:11.201637523Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.209568334Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.931351ms grafana | logger=migrator t=2024-09-09T17:02:11.213461303Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.219546202Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.084569ms grafana | logger=migrator t=2024-09-09T17:02:11.225799723Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2024-09-09T17:02:11.2267892Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=989.097µs grafana | logger=migrator t=2024-09-09T17:02:11.229720482Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2024-09-09T17:02:11.235659548Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.938595ms grafana | logger=migrator t=2024-09-09T17:02:11.238771073Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2024-09-09T17:02:11.244685359Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.910786ms grafana | logger=migrator t=2024-09-09T17:02:11.250031824Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2024-09-09T17:02:11.250094565Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=63.061µs grafana | logger=migrator t=2024-09-09T17:02:11.253092568Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-09-09T17:02:11.254166037Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.071949ms grafana | logger=migrator t=2024-09-09T17:02:11.257566487Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-09-09T17:02:11.259122705Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.555598ms grafana | logger=migrator t=2024-09-09T17:02:11.264175125Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-09-09T17:02:11.265202003Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.026598ms grafana | logger=migrator t=2024-09-09T17:02:11.268078364Z level=info msg="Executing migration" id="alter alert_rule_version table data 
column to mediumtext in mysql" grafana | logger=migrator t=2024-09-09T17:02:11.268139085Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=61.231µs grafana | logger=migrator t=2024-09-09T17:02:11.270339374Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-09-09T17:02:11.276438602Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.098418ms grafana | logger=migrator t=2024-09-09T17:02:11.281258758Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-09-09T17:02:11.287393857Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.134739ms grafana | logger=migrator t=2024-09-09T17:02:11.290180626Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-09-09T17:02:11.296301145Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.119949ms grafana | logger=migrator t=2024-09-09T17:02:11.299181006Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-09-09T17:02:11.30558975Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.411374ms grafana | logger=migrator t=2024-09-09T17:02:11.310309624Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-09-09T17:02:11.317061893Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.751459ms grafana | logger=migrator t=2024-09-09T17:02:11.320115308Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-09-09T17:02:11.32018065Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=65.832µs grafana | logger=migrator t=2024-09-09T17:02:11.322348467Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-09-09T17:02:11.323140042Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=791.175µs grafana | logger=migrator t=2024-09-09T17:02:11.325972862Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.332160022Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.18905ms grafana | logger=migrator t=2024-09-09T17:02:11.336915427Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-09-09T17:02:11.336998198Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=83.041µs grafana | logger=migrator t=2024-09-09T17:02:11.339690626Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.348217928Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.525481ms grafana | logger=migrator 
t=2024-09-09T17:02:11.41253198Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2024-09-09T17:02:11.414159268Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.626628ms grafana | logger=migrator t=2024-09-09T17:02:11.421636501Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.427832241Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.19585ms grafana | logger=migrator t=2024-09-09T17:02:11.430708082Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-09-09T17:02:11.432032725Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.324343ms grafana | logger=migrator t=2024-09-09T17:02:11.436420264Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-09-09T17:02:11.438010112Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.589728ms grafana | logger=migrator t=2024-09-09T17:02:11.444168061Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.44911495Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.950089ms grafana | logger=migrator t=2024-09-09T17:02:11.453491227Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-09-09T17:02:11.454315131Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=823.824µs grafana | logger=migrator t=2024-09-09T17:02:11.460771996Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2024-09-09T17:02:11.462415235Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.642439ms grafana | logger=migrator t=2024-09-09T17:02:11.467007057Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2024-09-09T17:02:11.4683584Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.350483ms grafana | logger=migrator t=2024-09-09T17:02:11.474764525Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2024-09-09T17:02:11.475837184Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.073559ms grafana | logger=migrator t=2024-09-09T17:02:11.479450678Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2024-09-09T17:02:11.479515539Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=65.072µs grafana | logger=migrator t=2024-09-09T17:02:11.483028872Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2024-09-09T17:02:11.484479477Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.449145ms grafana | logger=migrator 
t=2024-09-09T17:02:11.492071232Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.493551928Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.479476ms grafana | logger=migrator t=2024-09-09T17:02:11.497935796Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-09-09T17:02:11.498569777Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2024-09-09T17:02:11.502968626Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2024-09-09T17:02:11.503614337Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=644.451µs grafana | logger=migrator t=2024-09-09T17:02:11.508408302Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2024-09-09T17:02:11.509947319Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.538897ms grafana | logger=migrator t=2024-09-09T17:02:11.513411742Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2024-09-09T17:02:11.520002668Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.589476ms grafana | logger=migrator t=2024-09-09T17:02:11.526075846Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2024-09-09T17:02:11.526799199Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=723.003µs grafana | logger=migrator t=2024-09-09T17:02:11.535506783Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2024-09-09T17:02:11.537170953Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.66306ms grafana | logger=migrator t=2024-09-09T17:02:11.541527071Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2024-09-09T17:02:11.542815153Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.287642ms grafana | logger=migrator t=2024-09-09T17:02:11.547396485Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2024-09-09T17:02:11.548386702Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=989.947µs grafana | logger=migrator t=2024-09-09T17:02:11.551460387Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:11.552415753Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=954.446µs grafana | logger=migrator t=2024-09-09T17:02:11.556069249Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2024-09-09T17:02:11.55609569Z 
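
The level=warn line above, "Skipping migration: Already executed, but not recorded in migration log", shows the migrator's idempotency guard: every completed migration id is recorded in a log table, a migration only runs when its id is absent, and a change detected as already present is skipped and merely noted. A toy version of that guard (hypothetical table layout; not Grafana's implementation):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE migration_log (migration_id TEXT PRIMARY KEY, success INTEGER, timestamp TEXT)")

def run_once(con, migration_id, ddl):
    # skip any migration whose id is already recorded
    if con.execute("SELECT 1 FROM migration_log WHERE migration_id = ?", (migration_id,)).fetchone():
        print(f"skipping {migration_id!r}: already executed")
        return
    con.execute(ddl)
    con.execute("INSERT INTO migration_log VALUES (?, 1, datetime('now'))", (migration_id,))
    print(f"executed {migration_id!r}")

run_once(con, "create team table", "CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT)")
run_once(con, "create team table", "CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT)")  # second call is a no-op

The warn case in the log is the rarer variant: the schema change exists but the log row does not, so the migrator records the skip instead of failing on a duplicate-DDL error.
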
level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.741µs grafana | logger=migrator t=2024-09-09T17:02:11.559198375Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2024-09-09T17:02:11.559262376Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=64.411µs grafana | logger=migrator t=2024-09-09T17:02:11.568042002Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2024-09-09T17:02:11.577147673Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=9.105671ms grafana | logger=migrator t=2024-09-09T17:02:11.584166898Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2024-09-09T17:02:11.584536845Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=369.687µs grafana | logger=migrator t=2024-09-09T17:02:11.586903477Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2024-09-09T17:02:11.587964056Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.050068ms grafana | logger=migrator t=2024-09-09T17:02:11.59326579Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2024-09-09T17:02:11.593679877Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=414.477µs grafana | logger=migrator t=2024-09-09T17:02:11.598803958Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2024-09-09T17:02:11.600282495Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.477867ms grafana | logger=migrator t=2024-09-09T17:02:11.605591548Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2024-09-09T17:02:11.606894022Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.301614ms grafana | logger=migrator t=2024-09-09T17:02:11.612349679Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2024-09-09T17:02:11.644081502Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.731513ms grafana | logger=migrator t=2024-09-09T17:02:11.647917661Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2024-09-09T17:02:11.654853714Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.934933ms grafana | logger=migrator t=2024-09-09T17:02:11.657942379Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2024-09-09T17:02:11.658081851Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=139.403µs grafana | logger=migrator t=2024-09-09T17:02:11.661271687Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2024-09-09T17:02:11.693423049Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.151362ms grafana | logger=migrator t=2024-09-09T17:02:11.698886446Z level=info 
msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2024-09-09T17:02:11.728894458Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.007952ms grafana | logger=migrator t=2024-09-09T17:02:11.775444286Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2024-09-09T17:02:11.7767845Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.340195ms grafana | logger=migrator t=2024-09-09T17:02:11.783262795Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2024-09-09T17:02:11.784926494Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.663359ms grafana | logger=migrator t=2024-09-09T17:02:11.789339422Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2024-09-09T17:02:11.789581797Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=242.125µs grafana | logger=migrator t=2024-09-09T17:02:11.792759453Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2024-09-09T17:02:11.793592748Z level=info msg="Migration successfully executed" id="create permission table" duration=833.135µs grafana | logger=migrator t=2024-09-09T17:02:11.796874647Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2024-09-09T17:02:11.798399543Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.521307ms grafana | logger=migrator t=2024-09-09T17:02:11.804190406Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2024-09-09T17:02:11.805826055Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.636199ms grafana | logger=migrator t=2024-09-09T17:02:11.810171282Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2024-09-09T17:02:11.811356094Z level=info msg="Migration successfully executed" id="create role table" duration=1.185912ms grafana | logger=migrator t=2024-09-09T17:02:11.814261795Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2024-09-09T17:02:11.821659597Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.396882ms grafana | logger=migrator t=2024-09-09T17:02:11.830762208Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2024-09-09T17:02:11.841768754Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.007526ms grafana | logger=migrator t=2024-09-09T17:02:11.844665785Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2024-09-09T17:02:11.845421108Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=754.743µs grafana | logger=migrator t=2024-09-09T17:02:11.848451362Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2024-09-09T17:02:11.849213205Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=761.303µs grafana | logger=migrator 
t=2024-09-09T17:02:11.852399263Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:11.853417631Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.017698ms grafana | logger=migrator t=2024-09-09T17:02:11.858828307Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2024-09-09T17:02:11.860065668Z level=info msg="Migration successfully executed" id="create team role table" duration=1.235661ms grafana | logger=migrator t=2024-09-09T17:02:11.866637126Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2024-09-09T17:02:11.868288964Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.650848ms grafana | logger=migrator t=2024-09-09T17:02:11.872765054Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2024-09-09T17:02:11.873841843Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.082359ms grafana | logger=migrator t=2024-09-09T17:02:11.876883697Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2024-09-09T17:02:11.877929926Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.045389ms grafana | logger=migrator t=2024-09-09T17:02:11.881413558Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2024-09-09T17:02:11.882210682Z level=info msg="Migration successfully executed" id="create user role table" duration=796.524µs grafana | logger=migrator t=2024-09-09T17:02:11.886747673Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2024-09-09T17:02:11.888259849Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.507516ms grafana | logger=migrator t=2024-09-09T17:02:11.892616497Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2024-09-09T17:02:11.894308046Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.690599ms grafana | logger=migrator t=2024-09-09T17:02:11.897484494Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2024-09-09T17:02:11.898582693Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.076648ms grafana | logger=migrator t=2024-09-09T17:02:11.90347297Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2024-09-09T17:02:11.904455927Z level=info msg="Migration successfully executed" id="create builtin role table" duration=981.637µs grafana | logger=migrator t=2024-09-09T17:02:11.908474699Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2024-09-09T17:02:11.910170419Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.69481ms grafana | logger=migrator t=2024-09-09T17:02:11.914452295Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2024-09-09T17:02:11.915509954Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.058039ms grafana | logger=migrator 
t=2024-09-09T17:02:11.920714166Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2024-09-09T17:02:11.930887987Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.175641ms grafana | logger=migrator t=2024-09-09T17:02:11.934001273Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2024-09-09T17:02:11.934750856Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=750.143µs grafana | logger=migrator t=2024-09-09T17:02:11.941572406Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2024-09-09T17:02:11.942621706Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.04874ms grafana | logger=migrator t=2024-09-09T17:02:11.948291336Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2024-09-09T17:02:11.949858404Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.566558ms grafana | logger=migrator t=2024-09-09T17:02:11.953184223Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2024-09-09T17:02:11.954799212Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.614228ms grafana | logger=migrator t=2024-09-09T17:02:11.960001914Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2024-09-09T17:02:11.960786928Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=784.204µs grafana | logger=migrator t=2024-09-09T17:02:11.963883093Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2024-09-09T17:02:11.965482331Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.597428ms grafana | logger=migrator t=2024-09-09T17:02:11.969050195Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2024-09-09T17:02:11.978396381Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.346626ms grafana | logger=migrator t=2024-09-09T17:02:11.98733444Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2024-09-09T17:02:11.995252001Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.916911ms grafana | logger=migrator t=2024-09-09T17:02:11.999218151Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2024-09-09T17:02:12.007892724Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.678373ms grafana | logger=migrator t=2024-09-09T17:02:12.011348515Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2024-09-09T17:02:12.017245529Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.897304ms grafana | logger=migrator t=2024-09-09T17:02:12.023952068Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2024-09-09T17:02:12.025022567Z level=info msg="Migration successfully executed" id="add permission 
identifier index" duration=1.070409ms grafana | logger=migrator t=2024-09-09T17:02:12.027945399Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2024-09-09T17:02:12.029002358Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.05736ms grafana | logger=migrator t=2024-09-09T17:02:12.032015691Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2024-09-09T17:02:12.033041559Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.025508ms grafana | logger=migrator t=2024-09-09T17:02:12.036762614Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2024-09-09T17:02:12.03762488Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=862.446µs grafana | logger=migrator t=2024-09-09T17:02:12.04043023Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2024-09-09T17:02:12.041487658Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.057188ms grafana | logger=migrator t=2024-09-09T17:02:12.045465258Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2024-09-09T17:02:12.045529629Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=65.001µs grafana | logger=migrator t=2024-09-09T17:02:12.048299958Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2024-09-09T17:02:12.049106463Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=806.145µs grafana | logger=migrator t=2024-09-09T17:02:12.051877482Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2024-09-09T17:02:12.051913382Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=36.81µs grafana | logger=migrator t=2024-09-09T17:02:12.054339005Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2024-09-09T17:02:12.054804013Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=465.238µs grafana | logger=migrator t=2024-09-09T17:02:12.060401462Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2024-09-09T17:02:12.061167606Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=766.314µs grafana | logger=migrator t=2024-09-09T17:02:12.06535737Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2024-09-09T17:02:12.066314457Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=957.097µs grafana | logger=migrator t=2024-09-09T17:02:12.069596484Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2024-09-09T17:02:12.069797368Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=200.964µs grafana | logger=migrator t=2024-09-09T17:02:12.07274333Z level=info msg="Executing migration" id="alerting notification 
permissions" grafana | logger=migrator t=2024-09-09T17:02:12.073196788Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=453.358µs grafana | logger=migrator t=2024-09-09T17:02:12.077061807Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2024-09-09T17:02:12.077888771Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=826.664µs grafana | logger=migrator t=2024-09-09T17:02:12.122694213Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2024-09-09T17:02:12.124376643Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.6823ms grafana | logger=migrator t=2024-09-09T17:02:12.129641936Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2024-09-09T17:02:12.138983491Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.341995ms grafana | logger=migrator t=2024-09-09T17:02:12.142755967Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2024-09-09T17:02:12.142822449Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=64.381µs grafana | logger=migrator t=2024-09-09T17:02:12.146673607Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-09-09T17:02:12.147744086Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.069389ms grafana | logger=migrator t=2024-09-09T17:02:12.15306539Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-09-09T17:02:12.154166459Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.100889ms grafana | logger=migrator t=2024-09-09T17:02:12.161115182Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-09-09T17:02:12.162796761Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.681039ms grafana | logger=migrator t=2024-09-09T17:02:12.167098378Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2024-09-09T17:02:12.176119417Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.021799ms grafana | logger=migrator t=2024-09-09T17:02:12.179985435Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.180818111Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=831.715µs grafana | logger=migrator t=2024-09-09T17:02:12.184557586Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.185282219Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=724.293µs grafana | logger=migrator t=2024-09-09T17:02:12.188279502Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2024-09-09T17:02:12.211649935Z level=info msg="Migration successfully executed" id="Rename table 
correlation to correlation_tmp_qwerty - v1" duration=23.370173ms grafana | logger=migrator t=2024-09-09T17:02:12.217028831Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2024-09-09T17:02:12.217889595Z level=info msg="Migration successfully executed" id="create correlation v2" duration=859.414µs grafana | logger=migrator t=2024-09-09T17:02:12.221400107Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.222422836Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.022559ms grafana | logger=migrator t=2024-09-09T17:02:12.225352927Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.226388475Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.035588ms grafana | logger=migrator t=2024-09-09T17:02:12.230470427Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-09-09T17:02:12.231961954Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.489007ms grafana | logger=migrator t=2024-09-09T17:02:12.241673985Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2024-09-09T17:02:12.24191036Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=235.675µs grafana | logger=migrator t=2024-09-09T17:02:12.245110837Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2024-09-09T17:02:12.246240306Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.129139ms grafana | logger=migrator t=2024-09-09T17:02:12.250996201Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2024-09-09T17:02:12.257703729Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.712838ms grafana | logger=migrator t=2024-09-09T17:02:12.260765073Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2024-09-09T17:02:12.261452036Z level=info msg="Migration successfully executed" id="create entity_events table" duration=686.603µs grafana | logger=migrator t=2024-09-09T17:02:12.265504227Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2024-09-09T17:02:12.266306981Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=799.664µs grafana | logger=migrator t=2024-09-09T17:02:12.26964679Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.270644118Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.275222258Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.275977632Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.280266798Z level=info msg="Executing 
migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2024-09-09T17:02:12.281117343Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=854.476µs grafana | logger=migrator t=2024-09-09T17:02:12.288279879Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2024-09-09T17:02:12.290104432Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.824693ms grafana | logger=migrator t=2024-09-09T17:02:12.294509619Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.295624689Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.11578ms grafana | logger=migrator t=2024-09-09T17:02:12.299457547Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-09-09T17:02:12.300531226Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.073359ms grafana | logger=migrator t=2024-09-09T17:02:12.303613171Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.304651339Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.038067ms grafana | logger=migrator t=2024-09-09T17:02:12.307604791Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.308629669Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.022587ms grafana | logger=migrator t=2024-09-09T17:02:12.312521158Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2024-09-09T17:02:12.313265311Z level=info msg="Migration successfully executed" id="Drop public config table" duration=744.053µs grafana | logger=migrator t=2024-09-09T17:02:12.316277554Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2024-09-09T17:02:12.317423535Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.143521ms grafana | logger=migrator t=2024-09-09T17:02:12.324920317Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.326928013Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.007575ms grafana | logger=migrator t=2024-09-09T17:02:12.334446175Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-09-09T17:02:12.335525174Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.078189ms grafana | logger=migrator t=2024-09-09T17:02:12.339713088Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2024-09-09T17:02:12.341417568Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" 
duration=1.70373ms grafana | logger=migrator t=2024-09-09T17:02:12.345014862Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-09-09T17:02:12.369676028Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.662396ms grafana | logger=migrator t=2024-09-09T17:02:12.374653936Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2024-09-09T17:02:12.380708973Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.053977ms grafana | logger=migrator t=2024-09-09T17:02:12.383713346Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2024-09-09T17:02:12.39245722Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.740934ms grafana | logger=migrator t=2024-09-09T17:02:12.396683375Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2024-09-09T17:02:12.396920159Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=237.204µs grafana | logger=migrator t=2024-09-09T17:02:12.404109286Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2024-09-09T17:02:12.415166031Z level=info msg="Migration successfully executed" id="add share column" duration=11.058055ms grafana | logger=migrator t=2024-09-09T17:02:12.418074093Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2024-09-09T17:02:12.418200396Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=126.343µs grafana | logger=migrator t=2024-09-09T17:02:12.422276197Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2024-09-09T17:02:12.423196403Z level=info msg="Migration successfully executed" id="create file table" duration=919.656µs grafana | logger=migrator t=2024-09-09T17:02:12.42750476Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2024-09-09T17:02:12.42868944Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.18443ms grafana | logger=migrator t=2024-09-09T17:02:12.432628441Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2024-09-09T17:02:12.433685119Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.056558ms grafana | logger=migrator t=2024-09-09T17:02:12.483468349Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2024-09-09T17:02:12.484714931Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.246312ms grafana | logger=migrator t=2024-09-09T17:02:12.489990044Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2024-09-09T17:02:12.491716475Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.725611ms grafana | logger=migrator t=2024-09-09T17:02:12.495472651Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator 
t=2024-09-09T17:02:12.495574602Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=103.081µs grafana | logger=migrator t=2024-09-09T17:02:12.500344347Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2024-09-09T17:02:12.500411578Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=67.791µs grafana | logger=migrator t=2024-09-09T17:02:12.502691729Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-09-09T17:02:12.503237148Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=545.449µs grafana | logger=migrator t=2024-09-09T17:02:12.5072948Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2024-09-09T17:02:12.507621395Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=326.636µs grafana | logger=migrator t=2024-09-09T17:02:12.51123754Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2024-09-09T17:02:12.513240945Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.004215ms grafana | logger=migrator t=2024-09-09T17:02:12.517785815Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2024-09-09T17:02:12.526767134Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.982739ms grafana | logger=migrator t=2024-09-09T17:02:12.52992913Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2024-09-09T17:02:12.530086192Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.392µs grafana | logger=migrator t=2024-09-09T17:02:12.533095176Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2024-09-09T17:02:12.534176035Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.080289ms grafana | logger=migrator t=2024-09-09T17:02:12.537493903Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2024-09-09T17:02:12.53789358Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=400.167µs grafana | logger=migrator t=2024-09-09T17:02:12.542151106Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2024-09-09T17:02:12.542472792Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=321.376µs grafana | logger=migrator t=2024-09-09T17:02:12.545866062Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2024-09-09T17:02:12.546635575Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=769.393µs grafana | logger=migrator t=2024-09-09T17:02:12.553068609Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2024-09-09T17:02:12.563519773Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.451734ms grafana | 
logger=migrator t=2024-09-09T17:02:12.56670848Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-09-09T17:02:12.575367453Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.657933ms grafana | logger=migrator t=2024-09-09T17:02:12.579834372Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2024-09-09T17:02:12.580666127Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=832.525µs grafana | logger=migrator t=2024-09-09T17:02:12.58543327Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2024-09-09T17:02:12.659012901Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.581611ms grafana | logger=migrator t=2024-09-09T17:02:12.6634704Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2024-09-09T17:02:12.664681211Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.209991ms grafana | logger=migrator t=2024-09-09T17:02:12.668740283Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-09-09T17:02:12.669979525Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.238552ms grafana | logger=migrator t=2024-09-09T17:02:12.673223733Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-09-09T17:02:12.699768082Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.544098ms grafana | logger=migrator t=2024-09-09T17:02:12.704568436Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2024-09-09T17:02:12.711083871Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.515075ms grafana | logger=migrator t=2024-09-09T17:02:12.714162386Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2024-09-09T17:02:12.714490101Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=325.625µs grafana | logger=migrator t=2024-09-09T17:02:12.7194883Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2024-09-09T17:02:12.719717004Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=256.435µs grafana | logger=migrator t=2024-09-09T17:02:12.727444921Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-09-09T17:02:12.727789316Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=342.036µs grafana | logger=migrator t=2024-09-09T17:02:12.73137487Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2024-09-09T17:02:12.731565133Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=190.183µs 
grafana | logger=migrator t=2024-09-09T17:02:12.737272914Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
grafana | logger=migrator t=2024-09-09T17:02:12.737538029Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=265.035µs
grafana | logger=migrator t=2024-09-09T17:02:12.741023101Z level=info msg="Executing migration" id="create folder table"
grafana | logger=migrator t=2024-09-09T17:02:12.74270822Z level=info msg="Migration successfully executed" id="create folder table" duration=1.68712ms
grafana | logger=migrator t=2024-09-09T17:02:12.746793343Z level=info msg="Executing migration" id="Add index for parent_uid"
grafana | logger=migrator t=2024-09-09T17:02:12.747999184Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.205561ms
grafana | logger=migrator t=2024-09-09T17:02:12.755435535Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
grafana | logger=migrator t=2024-09-09T17:02:12.756546504Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.113569ms
grafana | logger=migrator t=2024-09-09T17:02:12.762144944Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2024-09-09T17:02:12.762183584Z level=info msg="Migration successfully executed" id="Update folder title length" duration=39.7µs
grafana | logger=migrator t=2024-09-09T17:02:12.765587225Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-09-09T17:02:12.767439697Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.851872ms
grafana | logger=migrator t=2024-09-09T17:02:12.771584341Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-09-09T17:02:12.773167238Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.581777ms
grafana | logger=migrator t=2024-09-09T17:02:12.776016459Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
grafana | logger=migrator t=2024-09-09T17:02:12.777143659Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.12654ms
grafana | logger=migrator t=2024-09-09T17:02:12.780952906Z level=info msg="Executing migration" id="Sync dashboard and folder table"
grafana | logger=migrator t=2024-09-09T17:02:12.781365123Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=411.937µs
grafana | logger=migrator t=2024-09-09T17:02:12.78686213Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
grafana | logger=migrator t=2024-09-09T17:02:12.787154415Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=292.555µs
grafana | logger=migrator t=2024-09-09T17:02:12.823465617Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
grafana | logger=migrator t=2024-09-09T17:02:12.824535857Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.07034ms
grafana | logger=migrator t=2024-09-09T17:02:12.829388072Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
grafana | logger=migrator t=2024-09-09T17:02:12.830499122Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.1084ms
grafana | logger=migrator t=2024-09-09T17:02:12.834340619Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
grafana | logger=migrator t=2024-09-09T17:02:12.835376647Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.035608ms
grafana | logger=migrator t=2024-09-09T17:02:12.839575942Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2024-09-09T17:02:12.840708812Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.13163ms
grafana | logger=migrator t=2024-09-09T17:02:12.845315474Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
grafana | logger=migrator t=2024-09-09T17:02:12.846358362Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.042888ms
grafana | logger=migrator t=2024-09-09T17:02:12.852189484Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2024-09-09T17:02:12.854320922Z level=info msg="Migration successfully executed" id="create anon_device table" duration=2.131478ms
grafana | logger=migrator t=2024-09-09T17:02:12.861057042Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2024-09-09T17:02:12.862311964Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.254922ms
grafana | logger=migrator t=2024-09-09T17:02:12.865364618Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2024-09-09T17:02:12.866807644Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.445996ms
grafana | logger=migrator t=2024-09-09T17:02:12.872485534Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2024-09-09T17:02:12.873459551Z level=info msg="Migration successfully executed" id="create signing_key table" duration=973.847µs
grafana | logger=migrator t=2024-09-09T17:02:12.879364235Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-09-09T17:02:12.880648258Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.282923ms
grafana | logger=migrator t=2024-09-09T17:02:12.885763048Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-09-09T17:02:12.887434948Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.6719ms
grafana | logger=migrator t=2024-09-09T17:02:12.898793908Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-09-09T17:02:12.899599732Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=809.634µs
grafana | logger=migrator t=2024-09-09T17:02:12.904688493Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-09-09T17:02:12.91641639Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.728577ms
grafana | logger=migrator t=2024-09-09T17:02:12.925260957Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-09-09T17:02:12.926245574Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=985.717µs
grafana | logger=migrator t=2024-09-09T17:02:12.934781545Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-09-09T17:02:12.934826416Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=49.821µs
grafana | logger=migrator t=2024-09-09T17:02:12.938096173Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-09-09T17:02:12.939475998Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.379825ms
grafana | logger=migrator t=2024-09-09T17:02:12.943438678Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-09-09T17:02:12.943462558Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=19.131µs
grafana | logger=migrator t=2024-09-09T17:02:12.946200946Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2024-09-09T17:02:12.94755863Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.354454ms
grafana | logger=migrator t=2024-09-09T17:02:12.950682586Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2024-09-09T17:02:12.951856597Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.17749ms
grafana | logger=migrator t=2024-09-09T17:02:12.956902156Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-09-09T17:02:12.957974394Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.072028ms
grafana | logger=migrator t=2024-09-09T17:02:12.96114965Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-09-09T17:02:12.962297481Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.148831ms
grafana | logger=migrator t=2024-09-09T17:02:12.966139689Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-09-09T17:02:12.966408053Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=269.074µs
grafana | logger=migrator t=2024-09-09T17:02:12.970956804Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
grafana | logger=migrator t=2024-09-09T17:02:12.97188264Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=924.926µs
grafana | logger=migrator t=2024-09-09T17:02:12.976834638Z level=info msg="Executing migration" id="create cloud_migration table v1"
grafana | logger=migrator t=2024-09-09T17:02:12.978306303Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.471465ms
grafana | logger=migrator t=2024-09-09T17:02:12.98148517Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
grafana | logger=migrator t=2024-09-09T17:02:12.982394326Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=908.626µs
grafana | logger=migrator t=2024-09-09T17:02:12.988016725Z level=info msg="Executing migration" id="add stack_id column"
grafana | logger=migrator t=2024-09-09T17:02:12.999995467Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.976782ms
grafana | logger=migrator t=2024-09-09T17:02:13.008167141Z level=info msg="Executing migration" id="add region_slug column"
grafana | logger=migrator t=2024-09-09T17:02:13.017470986Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.303455ms
grafana | logger=migrator t=2024-09-09T17:02:13.020944138Z level=info msg="Executing migration" id="add cluster_slug column"
grafana | logger=migrator t=2024-09-09T17:02:13.028777325Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.831907ms
grafana | logger=migrator t=2024-09-09T17:02:13.032648744Z level=info msg="Executing migration" id="add migration uid column"
grafana | logger=migrator t=2024-09-09T17:02:13.04263089Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.981716ms
grafana | logger=migrator t=2024-09-09T17:02:13.053209557Z level=info msg="Executing migration" id="Update uid column values for migration"
grafana | logger=migrator t=2024-09-09T17:02:13.053531223Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=321.636µs
grafana | logger=migrator t=2024-09-09T17:02:13.057010355Z level=info msg="Executing migration" id="Add unique index migration_uid"
grafana | logger=migrator t=2024-09-09T17:02:13.059095881Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.085056ms
grafana | logger=migrator t=2024-09-09T17:02:13.06352697Z level=info msg="Executing migration" id="add migration run uid column"
grafana | logger=migrator t=2024-09-09T17:02:13.07320984Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.68253ms
grafana | logger=migrator t=2024-09-09T17:02:13.076290965Z level=info msg="Executing migration" id="Update uid column values for migration run"
grafana | logger=migrator t=2024-09-09T17:02:13.076471438Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=180.943µs
grafana | logger=migrator t=2024-09-09T17:02:13.078781679Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
grafana | logger=migrator t=2024-09-09T17:02:13.07994038Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.158361ms
grafana | logger=migrator t=2024-09-09T17:02:13.083116036Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
grafana | logger=migrator t=2024-09-09T17:02:13.108718178Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.600292ms
grafana | logger=migrator t=2024-09-09T17:02:13.146818242Z level=info msg="Executing migration" id="create cloud_migration_session v2"
grafana | logger=migrator t=2024-09-09T17:02:13.148505671Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.688619ms
grafana | logger=migrator t=2024-09-09T17:02:13.153046471Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
grafana | logger=migrator t=2024-09-09T17:02:13.155115428Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.068487ms
grafana | logger=migrator t=2024-09-09T17:02:13.158520898Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
grafana | logger=migrator t=2024-09-09T17:02:13.158942106Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=418.268µs
grafana | logger=migrator t=2024-09-09T17:02:13.165022813Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
grafana | logger=migrator t=2024-09-09T17:02:13.166732743Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.71494ms
grafana | logger=migrator t=2024-09-09T17:02:13.16995824Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
grafana | logger=migrator t=2024-09-09T17:02:13.197054359Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=27.096179ms
grafana | logger=migrator t=2024-09-09T17:02:13.200053112Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
grafana | logger=migrator t=2024-09-09T17:02:13.200750924Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=698.942µs
grafana | logger=migrator t=2024-09-09T17:02:13.204727584Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
grafana | logger=migrator t=2024-09-09T17:02:13.20558421Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=856.576µs
grafana | logger=migrator t=2024-09-09T17:02:13.208921769Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
grafana | logger=migrator t=2024-09-09T17:02:13.209481879Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=559.87µs
grafana | logger=migrator t=2024-09-09T17:02:13.216534533Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
grafana | logger=migrator t=2024-09-09T17:02:13.218025359Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.489716ms
grafana | logger=migrator t=2024-09-09T17:02:13.22430931Z level=info msg="Executing migration" id="add snapshot upload_url column"
grafana | logger=migrator t=2024-09-09T17:02:13.236440535Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.130475ms
grafana | logger=migrator t=2024-09-09T17:02:13.240341744Z level=info msg="Executing migration" id="add snapshot status column"
grafana | logger=migrator t=2024-09-09T17:02:13.248098821Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.756947ms
grafana | logger=migrator t=2024-09-09T17:02:13.25483485Z level=info msg="Executing migration" id="add snapshot local_directory column"
grafana | logger=migrator t=2024-09-09T17:02:13.264924638Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.089958ms
grafana | logger=migrator t=2024-09-09T17:02:13.268200676Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
grafana | logger=migrator t=2024-09-09T17:02:13.278148321Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.946775ms
grafana | logger=migrator t=2024-09-09T17:02:13.282357556Z level=info msg="Executing migration" id="add snapshot encryption_key column"
grafana | logger=migrator t=2024-09-09T17:02:13.291907514Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.549958ms
grafana | logger=migrator t=2024-09-09T17:02:13.29559738Z level=info msg="Executing migration" id="add snapshot error_string column"
grafana | logger=migrator t=2024-09-09T17:02:13.305684848Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.086468ms
grafana | logger=migrator t=2024-09-09T17:02:13.30970939Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
grafana | logger=migrator t=2024-09-09T17:02:13.310376921Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=667.531µs
grafana | logger=migrator t=2024-09-09T17:02:13.313578187Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
grafana | logger=migrator t=2024-09-09T17:02:13.34878371Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.204863ms
grafana | logger=migrator t=2024-09-09T17:02:13.351777833Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
grafana | logger=migrator t=2024-09-09T17:02:13.351830444Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=52.931µs
grafana | logger=migrator t=2024-09-09T17:02:13.357135618Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
grafana | logger=migrator t=2024-09-09T17:02:13.370751929Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.616901ms
grafana | logger=migrator t=2024-09-09T17:02:13.376041961Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
grafana | logger=migrator t=2024-09-09T17:02:13.383830499Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.787738ms
grafana | logger=migrator t=2024-09-09T17:02:13.387703648Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
grafana | logger=migrator t=2024-09-09T17:02:13.388101175Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=394.587µs
grafana | logger=migrator t=2024-09-09T17:02:13.393395038Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
grafana | logger=migrator t=2024-09-09T17:02:13.393619252Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=224.084µs
grafana | logger=migrator t=2024-09-09T17:02:13.396962411Z level=info msg="Executing migration" id="add record column to alert_rule table"
grafana | logger=migrator t=2024-09-09T17:02:13.410698334Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=13.736163ms
grafana | logger=migrator t=2024-09-09T17:02:13.415893436Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
grafana | logger=migrator t=2024-09-09T17:02:13.423621552Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.728116ms
grafana | logger=migrator t=2024-09-09T17:02:13.429672369Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
grafana | logger=migrator t=2024-09-09T17:02:13.438946513Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.273534ms
grafana | logger=migrator t=2024-09-09T17:02:13.45292479Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
grafana | logger=migrator t=2024-09-09T17:02:13.46538271Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=12.45789ms
grafana | logger=migrator t=2024-09-09T17:02:13.468667897Z level=info msg="Executing migration" id="Enable traceQL streaming for all Tempo datasources"
grafana | logger=migrator t=2024-09-09T17:02:13.468689859Z level=info msg="Migration successfully executed" id="Enable traceQL streaming for all Tempo datasources" duration=22.252µs
grafana | logger=migrator t=2024-09-09T17:02:13.476901513Z level=info msg="migrations completed" performed=594 skipped=0 duration=5.011912658s
grafana | logger=migrator t=2024-09-09T17:02:13.477875591Z level=info msg="Unlocking database"
grafana | logger=sqlstore t=2024-09-09T17:02:13.499609935Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-09-09T17:02:13.499844089Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-09-09T17:02:13.504408149Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-09-09T17:02:13.553781972Z level=info msg="Restored cache from database" duration=518.149µs
grafana | logger=plugin.store t=2024-09-09T17:02:13.555316499Z level=info msg="Loading plugins..."
grafana | logger=plugins.registration t=2024-09-09T17:02:13.585834788Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=plugins.initialization t=2024-09-09T17:02:13.585857688Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=local.finder t=2024-09-09T17:02:13.58593906Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-09-09T17:02:13.58595605Z level=info msg="Plugins loaded" count=54 duration=30.640401ms grafana | logger=query_data t=2024-09-09T17:02:13.590270387Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-09-09T17:02:13.594058823Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-09-09T17:02:13.602590164Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert.state.manager t=2024-09-09T17:02:13.611063734Z level=info msg="Running in alternative execution of Error/NoData mode" grafana | logger=infra.usagestats.collector t=2024-09-09T17:02:13.613225492Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-09-09T17:02:13.614847281Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-09-09T17:02:13.638088902Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-09-09T17:02:13.638111182Z level=info msg="finished to provision alerting" grafana | logger=ngalert.state.manager t=2024-09-09T17:02:13.638289285Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2024-09-09T17:02:13.639835653Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2024-09-09T17:02:13.641457241Z level=info msg="Storage starting" grafana | logger=http.server t=2024-09-09T17:02:13.642180433Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=grafana.update.checker t=2024-09-09T17:02:13.713485564Z level=info msg="Update check succeeded" duration=72.232757ms grafana | logger=ngalert.state.manager t=2024-09-09T17:02:13.719600802Z level=info msg="State cache has been initialized" states=0 duration=81.309336ms grafana | logger=ngalert.scheduler t=2024-09-09T17:02:13.719651022Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 grafana | logger=ticker t=2024-09-09T17:02:13.719828905Z level=info msg=starting first_tick=2024-09-09T17:02:20Z grafana | logger=plugins.update.checker t=2024-09-09T17:02:13.728998757Z level=info msg="Update check succeeded" duration=89.580412ms grafana | logger=provisioning.dashboard t=2024-09-09T17:02:13.750189112Z level=info msg="starting to provision dashboards" grafana | logger=sqlstore.transactions t=2024-09-09T17:02:13.822186225Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=grafana-apiserver t=2024-09-09T17:02:13.823803633Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2024-09-09T17:02:13.824295111Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to 
ResourceManager" grafana | logger=sqlstore.transactions t=2024-09-09T17:02:13.833286001Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" grafana | logger=sqlstore.transactions t=2024-09-09T17:02:13.86664711Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-09-09T17:02:13.871049388Z level=info msg="Patterns update finished" duration=78.692901ms grafana | logger=provisioning.dashboard t=2024-09-09T17:02:14.037757483Z level=info msg="finished to provision dashboards" grafana | logger=infra.usagestats t=2024-09-09T17:03:48.650063643Z level=info msg="Usage stats are ready to report" =================================== ======== Logs from kafka ======== kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-09-09 17:02:06,654] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client 
environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.0-ccs.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-3.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/utility-belt-7.7.0-130.jar:/usr/share/java/cp-base-new/common-utils-7.7.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-common-7.7.0-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.0-ccs.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.0-ccs.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] 
INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,655] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,656] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,658] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,661] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-09-09 17:02:06,665] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-09-09 17:02:06,670] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:06,680] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:06,680] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:06,687] INFO Socket connection established, initiating session, client: /172.17.0.6:40930, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:06,725] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000289e20000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:06,838] INFO Session: 0x100000289e20000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:06,838] INFO EventThread shut down for session: 0x100000289e20000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
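The preflight step above ("Check if Zookeeper is healthy") opens a short-lived ZooKeeper session against connectString=zookeeper:2181 with the 40000 ms session timeout shown in the log, waits for session establishment, and immediately closes it again. A minimal sketch of the same check with the plain Apache ZooKeeper client follows; Confluent's ZookeeperConnectionWatcher is internal, so a simple CountDownLatch watcher stands in for it here.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Sketch of the "Check if Zookeeper is healthy" preflight: open a session
// with the connect string and 40s session timeout from the log, wait for
// SyncConnected, then close the session again.
public final class ZookeeperPreflight {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        Watcher watcher = (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        };
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000, watcher);
        try {
            if (!connected.await(40, TimeUnit.SECONDS)) {
                throw new IllegalStateException("Zookeeper is not healthy");
            }
            System.out.println("Session established: 0x"
                    + Long.toHexString(zk.getSessionId()));
        } finally {
            zk.close(); // mirrors the "Session: ... closed" line in the log
        }
    }
}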
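Further down, the controller log records "Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1))": with auto.create.topics.enable=true, num.partitions=1, and default.replication.factor=1 (see the KafkaConfig dump below), the first client touching the topic triggers its creation on broker 1. Below is a hedged sketch of the equivalent explicit call via the Kafka AdminClient, connecting through the advertised PLAINTEXT_HOST listener localhost:29092; the suite itself relies on auto-creation, so this is illustrative only.

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Sketch: create policy-pdp-pap explicitly with the settings the broker
// applies implicitly (1 partition, replication factor 1).
public final class CreatePolicyTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // PLAINTEXT_HOST listener advertised in the KafkaConfig dump below.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(Set.of(topic)).all().get();
            // __consumer_offsets is created by the broker itself with
            // cleanup.policy=compact, as recorded further down in the log.
            System.out.println("Topics now: " + admin.listTopics().names().get());
        }
    }
}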
kafka | [2024-09-09 17:02:07,370] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-09-09 17:02:07,567] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-09-09 17:02:07,637] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-09-09 17:02:07,638] INFO starting (kafka.server.KafkaServer) kafka | [2024-09-09 17:02:07,638] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-09-09 17:02:07,649] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-09-09 17:02:07,653] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/b
in/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../sha
re/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,653] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,655] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@560348e6 (org.apache.zookeeper.ZooKeeper) kafka | [2024-09-09 17:02:07,659] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-09-09 17:02:07,663] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:07,671] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-09-09 17:02:07,692] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:07,695] INFO Socket connection established, initiating session, client: /172.17.0.6:40932, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:07,707] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000289e20001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-09-09 17:02:07,715] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-09-09 17:02:08,067] INFO Cluster ID = hj4zLbq6T_aCBjzSL5dtSA (kafka.server.KafkaServer) kafka | [2024-09-09 17:02:08,112] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | eligible.leader.replicas.enable = false kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.rebalance.protocols = [classic] kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 
kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.7-IV4 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | 
num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.allow.dn.changes = false kafka | ssl.allow.san.changes = false kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null 
kafka | ssl.truststore.type = JKS kafka | telemetry.max.bytes = 1048576 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | unstable.metadata.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-09-09 17:02:08,146] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-09-09 17:02:08,146] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-09-09 17:02:08,147] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-09-09 17:02:08,148] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-09-09 17:02:08,153] INFO [KafkaServer id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) kafka | [2024-09-09 17:02:08,290] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-09-09 17:02:08,296] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-09-09 17:02:08,303] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) kafka | [2024-09-09 17:02:08,304] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-09-09 17:02:08,305] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) kafka | [2024-09-09 17:02:08,321] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-09-09 17:02:08,384] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-09-09 17:02:08,396] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-09-09 17:02:08,409] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-09-09 17:02:08,432] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-09-09 17:02:08,731] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-09-09 17:02:08,747] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-09-09 17:02:08,747] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-09-09 17:02:08,751] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-09-09 17:02:08,755] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-09-09 17:02:08,776] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,779] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,780] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,782] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,782] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,794] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-09-09 17:02:08,795] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-09-09 17:02:08,822] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-09-09 17:02:08,848] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1725901328839,1725901328839,1,0,0,72057604941152257,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-09-09 17:02:08,850] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-09-09 17:02:08,889] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-09-09 17:02:08,894] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,899] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,900] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,909] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-09-09 17:02:08,918] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,920] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:08,922] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,924] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:08,928] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-09-09 17:02:08,944] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-09-09 17:02:08,951] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-09-09 17:02:08,951] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-09-09 17:02:08,952] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) kafka | [2024-09-09 17:02:08,952] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,965] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,970] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,974] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,990] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:08,992] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-09-09 17:02:08,994] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,004] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-09-09 17:02:09,010] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-09-09 17:02:09,011] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,011] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,011] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,012] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,015] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,015] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,015] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,015] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-09-09 17:02:09,017] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,019] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-09-09 17:02:09,029] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-09-09 17:02:09,029] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-09-09 17:02:09,030] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-09-09 17:02:09,031] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-09-09 17:02:09,032] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-09-09 17:02:09,032] INFO [PartitionStateMachine controllerId=1] Initializing 
partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-09-09 17:02:09,032] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-09-09 17:02:09,034] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-09-09 17:02:09,034] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,036] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) kafka | [2024-09-09 17:02:09,039] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) kafka | [2024-09-09 17:02:09,040] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71) kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:135) kafka | [2024-09-09 17:02:09,041] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) kafka | [2024-09-09 17:02:09,042] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,042] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,042] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,042] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,043] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-09-09 17:02:09,043] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,047] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-09-09 17:02:09,048] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) kafka | [2024-09-09 17:02:09,053] INFO Kafka version: 7.7.0-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-09-09 17:02:09,053] INFO Kafka commitId: 342a7370342e6bbcecbdf171dbe71cf87ce67c49 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-09-09 17:02:09,054] INFO Kafka startTimeMs: 1725901329050 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-09-09 17:02:09,055] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-09-09 17:02:09,055] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:09,144] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-09-09 17:02:09,216] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-09-09 17:02:09,246] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2024-09-09 17:02:09,264] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2024-09-09 17:02:14,057] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:14,057] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:40,966] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-09-09 17:02:40,972] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-09-09 17:02:40,982] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:40,993] INFO [Controller id=1] Acquired new 
producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:41,018] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(E_evwaVoT72PUq11Zim4eg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:41,018] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) kafka | [2024-09-09 17:02:41,020] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-09-09 17:02:41,020] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-09-09 17:02:41,024] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,044] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-09-09 17:02:41,047] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-09-09 17:02:41,048] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2024-09-09 17:02:41,050] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-09-09 17:02:41,052] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,056] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,057] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-09-09 17:02:41,073] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment 
[Set(TopicIdReplicaAssignment(__consumer_offsets,Some(TBmmuV3aTUq2q6hMzAP1Hw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-09-09 17:02:41,073] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
[state.change.logger, 17:02:41,074-081: 50 INFO entries -- Controller id=1 epoch=1 changed each partition __consumer_offsets-0 .. __consumer_offsets-49 from NonExistentPartition to NewPartition with assigned replicas 1]
kafka | [2024-09-09 17:02:41,076] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-09-09 17:02:41,077] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
kafka | [2024-09-09 17:02:41,077] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,081] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
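The burst of entries above is the broker auto-creating the internal __consumer_offsets topic the first time a consumer group (here the pap/pdp Kafka clients) joins or commits offsets: offsets.topic.num.partitions defaults to 50, and on this single-broker CSIT setup every partition gets the one replica. Which of the 50 partitions stores a given group's offsets comes from hashing the group id; a minimal sketch of that mapping (plain Python, not part of this job; the Integer.MIN_VALUE guard mirrors Kafka's Utils.abs as I understand it, and the group id is illustrative):

    def java_string_hashcode(s):
        # Java's String.hashCode: h = 31*h + char, wrapped to signed 32-bit.
        h = 0
        for c in s:
            h = (31 * h + ord(c)) & 0xFFFFFFFF
        return h - (1 << 32) if h >= (1 << 31) else h

    def offsets_partition_for(group_id, num_partitions=50):
        # Kafka keeps a group's offsets in partition
        # abs(groupId.hashCode) % offsets.topic.num.partitions.
        h = java_string_hashcode(group_id)
        h = 0 if h == -(1 << 31) else abs(h)  # Math.abs edge case in Java
        return h % num_partitions

    print(offsets_partition_for("policy-pap"))  # hypothetical group id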
[state.change.logger, 17:02:41,083-098: 50 TRACE entries -- Controller id=1 epoch=1 changed state of replica 1 for each partition __consumer_offsets-0 .. __consumer_offsets-49 from NonExistentReplica to NewReplica]
kafka | [2024-09-09 17:02:41,099] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,175] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,188] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,191] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,193] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,196] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(E_evwaVoT72PUq11Zim4eg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader None and previous leader epoch was -1. (state.change.logger)
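Each "Changed partition/replica ... state from X to Y" line is the ZK controller's state machine at work: partitions move NonExistentPartition -> NewPartition -> OnlinePartition, while each replica independently moves NonExistentReplica -> NewReplica -> OnlineReplica once its on-disk log exists (created above for policy-pdp-pap-0 under /var/lib/kafka/data) and a leader is chosen. A toy model of the legal transitions seen in this log (illustrative Python only, not Kafka's implementation; the real state machines have additional Offline/deletion states):

    PARTITION_TRANSITIONS = {
        "NonExistentPartition": {"NewPartition"},
        "NewPartition": {"OnlinePartition"},
        "OnlinePartition": {"OnlinePartition", "OfflinePartition"},
    }
    REPLICA_TRANSITIONS = {
        "NonExistentReplica": {"NewReplica"},
        "NewReplica": {"OnlineReplica"},
        "OnlineReplica": {"OnlineReplica", "OfflineReplica"},
    }

    def advance(table, current, target):
        # Flag any jump the log should never show (e.g. NonExistent -> Online).
        if target not in table.get(current, set()):
            raise ValueError(f"illegal transition {current} -> {target}")
        return target

    state = "NonExistentPartition"
    for nxt in ("NewPartition", "OnlinePartition"):
        state = advance(PARTITION_TRANSITIONS, state, nxt)
    print(state)  # OnlinePartition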
kafka | [2024-09-09 17:02:41,267] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-09-09 17:02:41,279] INFO [Broker id=1] Finished LeaderAndIsr request in 224ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,282] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=E_evwaVoT72PUq11Zim4eg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
[state.change.logger, 17:02:41,286-289: 50 INFO entries -- Controller id=1 epoch=1 changed each partition __consumer_offsets-0 .. __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0)]
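The LeaderAndIsr(...) payload in each OnlinePartition entry is the controller's source of truth for a partition: which broker leads (1), at which leaderEpoch, and which replicas are in sync (the ISR, just [1] on this single-broker setup). When auditing CSIT runs it can help to grep these fields out; a small parser for exactly the line shape above (plain Python, field names taken from the log itself):

    import re

    # Matches the OnlinePartition entries in this log and pulls out the
    # partition name, leader and leaderEpoch.
    PATTERN = re.compile(
        r"Changed partition (?P<partition>\S+) from NewPartition to OnlinePartition "
        r"with state LeaderAndIsr\(leader=(?P<leader>\d+), leaderEpoch=(?P<epoch>\d+)"
    )

    sample = ("kafka | [2024-09-09 17:02:41,286] INFO [Controller id=1 epoch=1] "
              "Changed partition __consumer_offsets-22 from NewPartition to "
              "OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, "
              "isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), "
              "leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)")

    m = PATTERN.search(sample)
    if m:
        print(m.group("partition"), "leader:", m.group("leader"), "epoch:", m.group("epoch"))
    # -> __consumer_offsets-22 leader: 1 epoch: 0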
[state.change.logger, 17:02:41,289-294: 50 TRACE entries -- Controller id=1 epoch=1 sending a become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=<N>, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1, one per partition __consumer_offsets-0 .. __consumer_offsets-49]
kafka | [2024-09-09 17:02:41,289] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-09-09 17:02:41,291] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-09-09 17:02:41,293] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
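The UpdateMetadata round-trip for policy-pdp-pap (correlation id 2) has completed with errorCode=0, and the controller is about to ship the batched LeaderAndIsr request for all 50 offsets partitions, after which any client asking kafka:9092 for metadata will see them. A quick way to confirm the topic layout from inside the CSIT network would be something like the following (assumes the third-party kafka-python package; not part of this job):

    from kafka import KafkaConsumer  # pip install kafka-python

    # Connect to the broker the log shows (kafka:9092) and ask for the
    # partition set of the internal offsets topic; expect {0, 1, ..., 49}.
    consumer = KafkaConsumer(bootstrap_servers="kafka:9092")
    partitions = consumer.partitions_for_topic("__consumer_offsets")
    print(sorted(partitions or []))
    consumer.close()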
kafka | [2024-09-09 17:02:41,294] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,297] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,298] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,300] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-09-09 17:02:41,301] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,301] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
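
Each replica above is walked through the controller's replica state machine; NewReplica to OnlineReplica is the step that marks a replica as live and eligible for the ISR. Because the transition is logged exactly once per partition, a line count over the captured console output is a quick sanity check; a sketch, assuming the output was saved to a file named console.log:

 $ grep -c '__consumer_offsets.*from NewReplica to OnlineReplica' console.log
 50
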
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,307] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,310] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,310] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,310] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,311] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,312] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,318] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
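
The broker-side Received entries mirror the fifty controller-side sends under the same correlation id 3. The partition count and the single-entry ISR are not accidents of this run: they follow from the broker settings used for the internal offsets topic, sketched below with the standard Kafka broker property names (values inferred from the log, where every partition shows replicas=[1] and isr=[1]):

 offsets.topic.num.partitions=50
 offsets.topic.replication.factor=1
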
kafka | [2024-09-09 17:02:41,319] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-09-09 17:02:41,333] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-09-09 17:02:41,333] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-09-09 17:02:41,333] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-09-09 17:02:41,334] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-09-09 17:02:41,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-09-09 17:02:41,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-09-09 17:02:41,337] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-09-09 17:02:41,338] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-09-09 17:02:41,339] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2024-09-09 17:02:41,339] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
kafka | [2024-09-09 17:02:41,345] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,346] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,346] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,346] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,347] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
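
Note the per-partition log properties in the Created log entry above and in the entries that follow: cleanup.policy=compact (the offsets topic keeps only the latest committed offset per group/topic/partition key) and segment.bytes=104857600, i.e. 100 MiB segments. The effective topic configuration can be read back with the configs CLI; a sketch, under the same container-name assumption as above:

 $ docker exec kafka kafka-configs --bootstrap-server kafka:9092 --entity-type topics --entity-name __consumer_offsets --describe
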
id=1] Leader __consumer_offsets-3 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,357] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,358] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,358] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,358] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,358] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,374] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,376] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,377] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,377] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,377] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-09-09 17:02:41,391] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,394] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,394] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,394] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,395] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,403] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,404] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,404] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,405] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,405] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,416] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,419] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,419] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,420] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,420] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-09-09 17:02:41,430] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,431] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,431] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,431] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,431] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,443] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,444] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,444] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,444] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,445] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,454] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,458] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,458] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,458] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,458] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-09-09 17:02:41,466] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,467] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,467] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,467] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,467] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,474] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,474] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,474] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,474] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,475] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,481] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,482] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,482] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,482] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,482] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-09-09 17:02:41,493] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,494] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,494] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,494] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,495] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,503] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,504] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,504] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,504] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,505] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-09-09 17:02:41,516] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-09-09 17:02:41,517] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-09-09 17:02:41,517] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,517] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-09-09 17:02:41,518] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
kafka | [2024-09-09 17:02:41,528] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,528] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,528] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,528] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,528] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,538] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,539] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,539] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,539] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,539] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,554] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,555] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,556] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,556] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,556] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,596] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,597] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,597] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,597] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,597] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,602] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,602] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,602] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,607] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,607] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,613] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,613] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,613] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,613] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,614] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,619] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,619] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,619] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,619] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,619] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,629] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,630] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,630] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,630] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,630] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,635] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,638] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,638] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,638] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,639] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,648] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,649] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,649] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,649] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,649] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,655] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,656] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,657] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,657] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,657] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,667] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,671] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,671] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,671] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,671] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,679] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,681] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,681] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,681] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,681] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,691] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,692] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,692] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,693] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,693] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,699] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,702] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,702] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,702] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,702] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,715] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,716] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,716] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,716] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,716] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,726] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,734] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,734] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,734] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,735] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,744] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,744] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,745] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,745] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,745] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,752] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,755] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,755] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,755] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,755] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,761] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,762] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,762] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,762] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,762] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,770] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,770] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,770] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,770] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,771] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,778] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,780] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,780] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,781] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,781] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,802] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,804] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,804] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,804] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,805] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,816] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,816] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,817] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,817] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,817] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,825] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,825] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,825] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,826] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,826] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,840] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,841] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,841] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,841] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,844] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,856] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,857] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,858] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,858] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,859] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,869] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,870] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,870] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,870] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,870] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,882] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,883] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,883] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,883] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,883] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,922] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,923] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,923] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,923] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,923] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,930] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,931] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,931] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,931] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,931] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,940] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,941] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,941] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,941] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,941] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,948] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,948] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,948] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,948] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,949] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,954] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,955] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,955] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,955] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,955] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-09-09 17:02:41,963] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-09-09 17:02:41,964] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-09-09 17:02:41,964] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,964] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-09-09 17:02:41,964] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(TBmmuV3aTUq2q6hMzAP1Hw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
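The entries above show the broker materializing all 50 partitions of the internal __consumer_offsets topic, each as a compacted log (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600). As a minimal sketch of how this state could be verified from a client, assuming Kafka's Java AdminClient and a broker reachable at localhost:9092 (the address and the class name OffsetsTopicCheck are illustrative, not part of this CSIT):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.Collections;
    import java.util.Properties;

    public class OffsetsTopicCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Hypothetical bootstrap address; inside the CSIT compose network it would differ.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Describe the internal offsets topic: 50 partitions by default
                // (offsets.topic.num.partitions), matching __consumer_offsets-0..49 above.
                TopicDescription desc = admin
                        .describeTopics(Collections.singleton("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                System.out.println("partitions = " + desc.partitions().size());
                // The per-partition logs above were created compacted; the topic config confirms it.
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config cfg = admin.describeConfigs(Collections.singleton(topic)).all().get().get(topic);
                System.out.println("cleanup.policy = " + cfg.get("cleanup.policy").value());
            }
        }
    }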
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-09-09 17:02:41,968] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-09-09 17:02:41,969] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-09-09 17:02:41,969] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-09-09 17:02:41,969] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-09-09 17:02:41,969] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-09-09 17:02:41,969] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-09-09 17:02:41,970] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,971] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,977] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,977] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
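The GroupCoordinator elections interleaved above follow from how Kafka maps a consumer group to the offsets topic: the group id is hashed to one of the __consumer_offsets partitions, and whichever broker leads that partition (broker 1 for all of them in this single-node setup) becomes the group's coordinator. A minimal sketch of that mapping, mirroring Kafka's partitionFor(groupId) (a non-negative hash modulo offsets.topic.num.partitions, 50 here); the group id "policy-pap" and the class name are illustrative:

    public class CoordinatorPartition {
        // Non-negative hash of the group id, modulo the number of
        // __consumer_offsets partitions, selects the coordinator partition.
        static int partitionFor(String groupId, int numOffsetsPartitions) {
            return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
        }

        public static void main(String[] args) {
            // "policy-pap" is a hypothetical group id; 50 partitions as created above.
            System.out.println(partitionFor("policy-pap", 50));
        }
    }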
kafka | [2024-09-09 17:02:41,978] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,978] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,978] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,979] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,979] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,979] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,980] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,980] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,981] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,982] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,982] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,983] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,983] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:41,984] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,984] INFO [Broker id=1] Finished LeaderAndIsr request in 683ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2024-09-09 17:02:41,985] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=TBmmuV3aTUq2q6hMzAP1Hw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-09-09 17:02:41,985] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,985] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,985] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,985] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,985] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,986] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,986] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,986] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,986] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] 
TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
(state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-09-09 17:02:41,988] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-09-09 17:02:42,053] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 89b80bf9-d0e1-47c1-bb80-7e89913e9ace in Empty state. Created a new member id consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-09-09 17:02:42,055] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. 
Created a new member id consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:42,068] INFO [GroupCoordinator 1]: Preparing to rebalance group 89b80bf9-d0e1-47c1-bb80-7e89913e9ace in state PreparingRebalance with old generation 0 (__consumer_offsets-4) (reason: Adding new member consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:42,068] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:42,290] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6296ea75-54c9-4370-bdfb-1d30b23d27b4 in Empty state. Created a new member id consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:42,292] INFO [GroupCoordinator 1]: Preparing to rebalance group 6296ea75-54c9-4370-bdfb-1d30b23d27b4 in state PreparingRebalance with old generation 0 (__consumer_offsets-6) (reason: Adding new member consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,080] INFO [GroupCoordinator 1]: Stabilized group 89b80bf9-d0e1-47c1-bb80-7e89913e9ace generation 1 (__consumer_offsets-4) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,083] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,105] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,106] INFO [GroupCoordinator 1]: Assignment received from leader consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823 for group 89b80bf9-d0e1-47c1-bb80-7e89913e9ace for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,293] INFO [GroupCoordinator 1]: Stabilized group 6296ea75-54c9-4370-bdfb-1d30b23d27b4 generation 1 (__consumer_offsets-6) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-09-09 17:02:45,306] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9 for group 6296ea75-54c9-4370-bdfb-1d30b23d27b4 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
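Note on the entries above: the offsets partition a group lands on is not arbitrary. Kafka places each consumer group on one of the __consumer_offsets partitions elected earlier in this log, and the broker leading that partition acts as the group's coordinator; the mapping is abs(groupId.hashCode) % offsets.topic.num.partitions (50 by default, matching the 50 partitions handled above). A minimal Java sketch of that rule (the class and method names here are illustrative, not Kafka's public API):

    // Illustrative sketch: how a group id maps to a __consumer_offsets partition,
    // and hence to its coordinator broker. Mirrors the rule used by Kafka's
    // GroupMetadataManager with the default of 50 offsets partitions.
    public class CoordinatorPartition {
        static int partitionFor(String groupId, int numOffsetsPartitions) {
            int h = groupId.hashCode();
            // Guard Integer.MIN_VALUE, whose Math.abs is itself negative.
            int nonNegative = (h == Integer.MIN_VALUE) ? 0 : Math.abs(h);
            return nonNegative % numOffsetsPartitions;
        }

        public static void main(String[] args) {
            // The log above shows these groups on partitions 24, 4 and 6 respectively.
            System.out.println(partitionFor("policy-pap", 50));
            System.out.println(partitionFor("89b80bf9-d0e1-47c1-bb80-7e89913e9ace", 50));
            System.out.println(partitionFor("6296ea75-54c9-4370-bdfb-1d30b23d27b4", 50));
        }
    }

The "rebalance failed due to MemberIdRequiredException" reason is likewise expected on a first join: a dynamic member sends JoinGroup with no member id, the coordinator assigns one and asks the client to rejoin with it, and the group then stabilizes at generation 1, as the Stabilized/Assignment entries show.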
===================================
======== Logs from mariadb ========
mariadb | 2024-09-09 17:02:04+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-09-09 17:02:04+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-09-09 17:02:04+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-09-09 17:02:04+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-09-09 17:02:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-09-09 17:02:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-09-09 17:02:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-09-09 17:02:05+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-09-09 17:02:05+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-09-09 17:02:05+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-09-09 17:02:06 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-09-09 17:02:06 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-09-09 17:02:06 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-09-09 17:02:06 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-09-09 17:02:06 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-09-09 17:02:06 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-09-09 17:02:06 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-09-09 17:02:06 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-09-09 17:02:06 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-09-09 17:02:06 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-09-09 17:02:06+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-09-09 17:02:08+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-09-09 17:02:08+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-09-09 17:02:08+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-09-09 17:02:08+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. 
mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-09-09 17:02:09+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-09-09 17:02:09 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: Buffer pool(s) dump completed at 240909 17:02:09 mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-09-09 17:02:09 0 [Note] InnoDB: Shutdown completed; log sequence number 347251; transaction id 298 mariadb | 2024-09-09 17:02:09 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-09-09 17:02:09+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-09-09 17:02:09+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 
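The db.sh run above provisions one schema per Policy component and grants the application user on each. A rough Python equivalent of the same loop, assuming mysql-connector-python is available (the grantee is expected to exist already, since the entrypoint created policy_user beforehand):

import mysql.connector  # mysql-connector-python, assumed available for this sketch

DATABASES = ["migration", "pooling", "policyadmin", "operationshistory", "clampacm", "policyclamp"]

def provision(root_password: str, app_user: str) -> None:
    conn = mysql.connector.connect(host="mariadb", user="root", password=root_password)
    cur = conn.cursor()
    for db in DATABASES:
        # Identifiers come from the fixed list above, mirroring the shell
        # script's variable interpolation; they are not user input.
        cur.execute(f"CREATE DATABASE IF NOT EXISTS `{db}`")
        cur.execute(f"GRANT ALL PRIVILEGES ON `{db}`.* TO '{app_user}'@'%'")
    cur.execute("FLUSH PRIVILEGES")
    conn.close()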
mariadb | mariadb | 2024-09-09 17:02:09 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-09-09 17:02:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-09-09 17:02:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-09-09 17:02:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: log sequence number 347251; transaction id 299 mariadb | 2024-09-09 17:02:10 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-09-09 17:02:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-09-09 17:02:10 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-09-09 17:02:10 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-09-09 17:02:10 0 [Note] Server socket created on IP: '::'. mariadb | 2024-09-09 17:02:10 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-09-09 17:02:10 0 [Note] InnoDB: Buffer pool(s) load completed at 240909 17:02:10 mariadb | 2024-09-09 17:02:10 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-09-09 17:02:10 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) mariadb | 2024-09-09 17:02:10 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2024-09-09 17:02:11 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) =================================== ======== Logs from apex-pdp ======== policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.6:9092) open policy-apex-pdp | Waiting for pap port 6969... 
policy-apex-pdp | pap (172.17.0.9:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-09-09T17:02:41.406+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-09-09T17:02:41.589+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 6296ea75-54c9-4370-bdfb-1d30b23d27b4 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 
0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-09-09T17:02:41.803+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-09-09T17:02:41.803+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-09-09T17:02:41.804+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901361802 policy-apex-pdp | [2024-09-09T17:02:41.808+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-1, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-09-09T17:02:41.825+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-09-09T17:02:41.825+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-09-09T17:02:41.827+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource 
[consumerGroup=6296ea75-54c9-4370-bdfb-1d30b23d27b4, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-09-09T17:02:41.874+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 6296ea75-54c9-4370-bdfb-1d30b23d27b4 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | 
sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-09-09T17:02:41.886+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-09-09T17:02:41.886+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-09-09T17:02:41.886+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901361886 policy-apex-pdp | [2024-09-09T17:02:41.886+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-09-09T17:02:41.887+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3f0f2b04-dd36-4c05-845a-37949086972a, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-09-09T17:02:41.899+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | 
ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-09-09T17:02:41.922+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901361959 policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3f0f2b04-dd36-4c05-845a-37949086972a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-09-09T17:02:41.959+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-09-09T17:02:41.965+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-09-09T17:02:41.965+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6296ea75-54c9-4370-bdfb-1d30b23d27b4, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6296ea75-54c9-4370-bdfb-1d30b23d27b4, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, 
#recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-09-09T17:02:41.970+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-09-09T17:02:41.993+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-09-09T17:02:41.994+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7b09bb38-2da4-428c-b064-078963b35bd1","timestampMs":1725901361974,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-09-09T17:02:42.152+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-09-09T17:02:42.152+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-09-09T17:02:42.152+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-09-09T17:02:42.152+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-09-09T17:02:42.162+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-09-09T17:02:42.162+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-09-09T17:02:42.162+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
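The ConsumerConfig dump above maps directly onto client construction. A minimal sketch of an equivalent subscriber in Python, assuming the kafka-python package (the non-default values are copied from the dump; everything else is left at its default):

from kafka import KafkaConsumer  # kafka-python, assumed available for this sketch

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers=["kafka:9092"],
    group_id="6296ea75-54c9-4370-bdfb-1d30b23d27b4",
    auto_offset_reset="latest",                       # auto.offset.reset = latest
    enable_auto_commit=True,                          # enable.auto.commit = true
    session_timeout_ms=45000,                         # session.timeout.ms = 45000
    value_deserializer=lambda b: b.decode("utf-8"),   # StringDeserializer equivalent
)
for record in consumer:
    print(record.value)  # each PDP_STATUS / PDP_UPDATE message as a JSON string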
policy-apex-pdp | [2024-09-09T17:02:42.162+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-09-09T17:02:42.271+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-apex-pdp | [2024-09-09T17:02:42.271+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-apex-pdp | [2024-09-09T17:02:42.273+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-09-09T17:02:42.273+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-09-09T17:02:42.279+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] (Re-)joining group policy-apex-pdp | [2024-09-09T17:02:42.291+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Request joining group due to: need to re-join with the given member-id: consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9 policy-apex-pdp | [2024-09-09T17:02:42.291+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-09-09T17:02:42.291+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] (Re-)joining group policy-apex-pdp | [2024-09-09T17:02:42.774+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-09-09T17:02:42.774+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-09-09T17:02:45.295+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9', protocol='range'} policy-apex-pdp | [2024-09-09T17:02:45.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Finished assignment for group at generation 1: {consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-09-09T17:02:45.312+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2-10c1dc2b-4a4a-4e14-88dd-14af3245aea9', protocol='range'} policy-apex-pdp | [2024-09-09T17:02:45.312+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-09-09T17:02:45.314+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-09-09T17:02:45.321+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-09-09T17:02:45.331+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6296ea75-54c9-4370-bdfb-1d30b23d27b4-2, groupId=6296ea75-54c9-4370-bdfb-1d30b23d27b4] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
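The traffic that follows shows why the PDP repeatedly logs "discarding event of type PDP_STATUS": PAP and the PDPs share the policy-pdp-pap topic, so each PDP sees PDP_STATUS traffic (including its own heartbeats) echoed back and must filter by message type. A sketch of that dispatch step (handle_update is a hypothetical handler for illustration, not ONAP code):

import json

def handle_update(msg: dict) -> None:
    print("applying PDP_UPDATE", msg.get("requestId"))  # hypothetical handler

def on_message(raw: str) -> None:
    msg = json.loads(raw)
    if msg.get("messageName") == "PDP_STATUS":
        return  # discarded, as the MessageTypeDispatcher lines below log
    if msg.get("messageName") == "PDP_UPDATE":
        handle_update(msg)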
policy-apex-pdp | [2024-09-09T17:02:56.160+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.4 - policyadmin [09/Sep/2024:17:02:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.54.1" policy-apex-pdp | [2024-09-09T17:03:01.971+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"942a4199-4630-4413-bf52-c2fe4335b00e","timestampMs":1725901381971,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-09-09T17:03:02.004+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"942a4199-4630-4413-bf52-c2fe4335b00e","timestampMs":1725901381971,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-09-09T17:03:02.009+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-09-09T17:03:02.178+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","timestampMs":1725901382119,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.187+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-09-09T17:03:02.188+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fe54868f-f1b7-42a5-b1e6-8219904f398f","timestampMs":1725901382187,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-09-09T17:03:02.189+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f216e26b-a14a-4d74-99b7-55c9b6fb9d61","timestampMs":1725901382189,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.203+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fe54868f-f1b7-42a5-b1e6-8219904f398f","timestampMs":1725901382187,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-09-09T17:03:02.203+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-09-09T17:03:02.206+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f216e26b-a14a-4d74-99b7-55c9b6fb9d61","timestampMs":1725901382189,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.206+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-09-09T17:03:02.249+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","timestampMs":1725901382120,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.251+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"1549db70-2b3a-49c8-8e37-62ea4d21d38b","timestampMs":1725901382251,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.266+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"1549db70-2b3a-49c8-8e37-62ea4d21d38b","timestampMs":1725901382251,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.266+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-09-09T17:03:02.335+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4096c3e6-0e02-4432-89af-04183b792907","timestampMs":1725901382306,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.337+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4096c3e6-0e02-4432-89af-04183b792907","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3db7acf7-6059-45c0-b6d6-885046e4cbe7","timestampMs":1725901382336,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.351+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4096c3e6-0e02-4432-89af-04183b792907","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3db7acf7-6059-45c0-b6d6-885046e4cbe7","timestampMs":1725901382336,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-09-09T17:03:02.351+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-09-09T17:03:56.084+00:00|INFO|RequestLog|qtp739264372-26] 172.17.0.4 - policyadmin [09/Sep/2024:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.54.1" =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
[Spring Boot ASCII-art startup banner elided] policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-09-09T17:02:19.208+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-09-09T17:02:19.270+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-09-09T17:02:19.271+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-09-09T17:02:21.113+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-09-09T17:02:21.194+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 71 ms. Found 6 JPA repository interfaces. policy-api | [2024-09-09T17:02:21.600+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-09-09T17:02:21.601+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-09-09T17:02:22.230+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-09-09T17:02:22.240+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-09-09T17:02:22.242+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-09-09T17:02:22.242+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-09-09T17:02:22.334+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-09-09T17:02:22.334+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2995 ms policy-api | [2024-09-09T17:02:22.785+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-09-09T17:02:22.865+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-09-09T17:02:22.912+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-09-09T17:02:23.196+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-09-09T17:02:23.227+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-09-09T17:02:23.313+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@312b34e3 policy-api | [2024-09-09T17:02:23.315+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
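Once Tomcat reports the service up on port 6969 with context path /policy/api/v1 (see the startup lines that follow), the CSIT suite can probe it over REST. An illustrative readiness check in Python, assuming the requests package (the healthcheck path and credentials here are placeholders, not taken from this log):

import requests  # assumed available

resp = requests.get(
    "http://policy-api:6969/policy/api/v1/healthcheck",  # endpoint path assumed for this sketch
    auth=("user", "password"),                           # placeholder credentials
    timeout=5,
)
print(resp.status_code)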
policy-api | [2024-09-09T17:02:25.246+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-09-09T17:02:25.250+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-09-09T17:02:26.289+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2024-09-09T17:02:27.087+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-09-09T17:02:28.166+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2024-09-09T17:02:28.376+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@433e9108, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@70ac3a87, org.springframework.security.web.context.SecurityContextHolderFilter@1604ad0f, org.springframework.security.web.header.HeaderWriterFilter@519d1224, org.springframework.security.web.authentication.logout.LogoutFilter@28062dc2, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@605049be, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1d93bd2a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6f54a7be, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@45bf64f7, org.springframework.security.web.access.ExceptionTranslationFilter@4ce824a7, org.springframework.security.web.access.intercept.AuthorizationFilter@6d67e03]
policy-api | [2024-09-09T17:02:29.149+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-api | [2024-09-09T17:02:29.254+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2024-09-09T17:02:29.284+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-api | [2024-09-09T17:02:29.306+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.78 seconds (process running for 11.385)
policy-api | [2024-09-09T17:02:39.926+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2024-09-09T17:02:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2024-09-09T17:02:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2024-09-09T17:03:18.335+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
===================================
======== Logs from csit-tests ========
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
===================================
======== Logs from policy-db-migrator ========
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
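The jpa*_properties and jpa*_metadata tables created above are key/value element tables: each map entry of the owning JPA entity becomes one row tied to the owner's composite (name, version) key. A minimal sketch against the jpapdpgroup_properties table just created; the group name and property values here are hypothetical, not taken from this run:

-- Two map entries for one (hypothetical) PDP group:
INSERT INTO jpapdpgroup_properties (name, version, PROPERTIES_KEY, PROPERTIES)
VALUES ('defaultGroup', '1.0.0', 'owner', 'csit'),
       ('defaultGroup', '1.0.0', 'purpose', 'smoke-test');

-- Reassembling the group's property map:
SELECT PROPERTIES_KEY, PROPERTIES
FROM jpapdpgroup_properties
WHERE name = 'defaultGroup' AND version = '1.0.0';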
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
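The plural/singular table pairs above (toscadatatypes/toscadatatype and the like) follow a container pattern: the combined join table links a concept container to its member concepts through the conceptContainerMap* columns. A small sketch using toscadatatypes_toscadatatype with made-up names; note that concpetContainerMapVersion is spelled exactly as in the DDL above:

-- Hypothetical container and member:
INSERT INTO toscadatatypes (name, version) VALUES ('myDataTypes', '1.0.0');
INSERT INTO toscadatatype (name, version) VALUES ('onap.datatypes.Example', '1.0.0');
INSERT INTO toscadatatypes_toscadatatype
       (conceptContainerMapName, concpetContainerMapVersion,
        conceptContainerName, conceptContainerVersion, name, version)
VALUES ('myDataTypes', '1.0.0', 'myDataTypes', '1.0.0',
        'onap.datatypes.Example', '1.0.0');

-- Listing the members of the container:
SELECT j.name, j.version
FROM toscadatatypes_toscadatatype j
WHERE j.conceptContainerMapName = 'myDataTypes'
  AND j.concpetContainerMapVersion = '1.0.0';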
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
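Scripts 0830-0950 create indexes whose names match the foreign keys that scripts 0960-1060 then add. On MariaDB/InnoDB the referencing columns must be indexed (the engine auto-creates an index if none exists), so creating a named index first keeps the index name under the migration's control. The pattern, reduced to a hypothetical parent/child pair not present in this schema:

-- Hypothetical tables illustrating the index-then-constraint pattern:
CREATE TABLE demo_parent (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL,
  PRIMARY KEY (name, version));
CREATE TABLE demo_child (id BIGINT NOT NULL PRIMARY KEY,
  parentName VARCHAR(120) NULL, parentVersion VARCHAR(20) NULL);

-- Named index on the referencing columns, created before the constraint:
CREATE INDEX FK_demo_child_parentName ON demo_child (parentName, parentVersion);

-- The constraint itself, with the same RESTRICT semantics as above:
ALTER TABLE demo_child
  ADD CONSTRAINT FK_demo_child_parentName
  FOREIGN KEY (parentName, parentVersion)
  REFERENCES demo_parent (name, version)
  ON UPDATE RESTRICT ON DELETE RESTRICT;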
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
policy-db-migrator | JOIN pdpstatistics b
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
policy-db-migrator | SET a.id = b.id
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
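The 0120/0130/0140-pk_pdpstatistics.sql steps above replace the natural key (timeStamp, name, version) with a surrogate ID: drop the old primary key, add the ID column, backfill it with a window function, then promote it into the new key. Collected in execution order, the same statements as above, consolidated here only for readability:

ALTER TABLE pdpstatistics DROP PRIMARY KEY;
ALTER TABLE pdpstatistics
  ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT,
  ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL,
  ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL,
  ADD COLUMN ID BIGINT NOT NULL;
-- Number the existing rows by timestamp so each gets a unique surrogate ID.
UPDATE pdpstatistics AS p
JOIN (SELECT name, version, timeStamp,
             ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
      FROM pdpstatistics
      GROUP BY name, version, timeStamp) AS t
  ON p.name = t.name AND p.version = t.version AND p.timeStamp = t.timeStamp
SET p.ID = t.row_num;
-- Promote the surrogate into the new primary key.
ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version);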
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | --------------
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | msg
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | TRUNCATE TABLE sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
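The audit_sequence and statistics_sequence tables seed JPA's table-based ID generator: SEQ_COUNT starts at the current maximum ID so newly generated keys cannot collide with existing rows. A sketch of the seeding as in 0120-audit_sequence.sql above, plus roughly what a later allocation amounts to (real generators reserve blocks of IDs, so the exact statements differ):

-- Seeding, as in the migration above:
INSERT INTO audit_sequence (SEQ_NAME, SEQ_COUNT)
VALUES ('SEQ_GEN', (SELECT IFNULL(MAX(id), 0) FROM jpapolicyaudit));

-- An allocation then effectively bumps and reads the counter:
UPDATE audit_sequence SET SEQ_COUNT = SEQ_COUNT + 1 WHERE SEQ_NAME = 'SEQ_GEN';
SELECT SEQ_COUNT FROM audit_sequence WHERE SEQ_NAME = 'SEQ_GEN';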
policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:11 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql 
upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:12 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 59 
0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:13 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:14 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 89 
0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:15 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0909241702110800u 1 2024-09-09 17:02:16 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0909241702110900u 1 2024-09-09 17:02:16 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:16 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:17 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:17 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0909241702111000u 1 2024-09-09 17:02:17 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0909241702111100u 1 2024-09-09 17:02:17 
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0909241702111200u 1 2024-09-09 17:02:17
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0909241702111200u 1 2024-09-09 17:02:17
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0909241702111200u 1 2024-09-09 17:02:17
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0909241702111200u 1 2024-09-09 17:02:17
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0909241702111300u 1 2024-09-09 17:02:17
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0909241702111300u 1 2024-09-09 17:02:17
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0909241702111300u 1 2024-09-09 17:02:17
policy-db-migrator | policyadmin: OK @ 1300
===================================
======== Logs from pap ========
policy-pap | Waiting for mariadb port 3306...
policy-pap | mariadb (172.17.0.3:3306) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.6:9092) open
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.10)
policy-pap |
policy-pap | [2024-09-09T17:02:31.466+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-pap | [2024-09-09T17:02:31.526+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-09-09T17:02:31.527+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-09-09T17:02:33.501+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-09-09T17:02:33.591+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-09-09T17:02:34.038+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-09-09T17:02:34.040+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-09-09T17:02:34.772+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-09-09T17:02:34.782+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-09-09T17:02:34.784+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-09-09T17:02:34.784+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-09-09T17:02:34.878+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-09-09T17:02:34.879+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3283 ms policy-pap | [2024-09-09T17:02:35.307+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-09-09T17:02:35.356+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-09-09T17:02:35.683+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-09-09T17:02:35.778+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 policy-pap | [2024-09-09T17:02:35.780+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2024-09-09T17:02:35.813+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-09-09T17:02:37.305+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-09-09T17:02:37.315+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-09-09T17:02:37.811+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-09-09T17:02:38.221+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-09-09T17:02:38.368+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-09-09T17:02:38.676+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 89b80bf9-d0e1-47c1-bb80-7e89913e9ace policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null 
policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-09-09T17:02:38.818+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:38.819+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:38.819+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901358817 policy-pap | [2024-09-09T17:02:38.821+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-1, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-09-09T17:02:38.821+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-09-09T17:02:38.827+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:38.827+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:38.827+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901358827 policy-pap | [2024-09-09T17:02:38.827+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-09-09T17:02:39.142+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, 
pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-09-09T17:02:39.317+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-09-09T17:02:39.572+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@21ba0d33, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@afb7b03, org.springframework.security.web.context.SecurityContextHolderFilter@76e2a621, org.springframework.security.web.header.HeaderWriterFilter@18b58c77, org.springframework.security.web.authentication.logout.LogoutFilter@6719f206, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@9825465, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2e7517aa, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@76105ac0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4fd63c43, org.springframework.security.web.access.ExceptionTranslationFilter@5ccc971e, org.springframework.security.web.access.intercept.AuthorizationFilter@cd93621] policy-pap | [2024-09-09T17:02:40.376+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-09-09T17:02:40.467+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-09-09T17:02:40.483+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-09-09T17:02:40.501+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-09-09T17:02:40.501+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-09-09T17:02:40.502+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-09-09T17:02:40.502+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-09-09T17:02:40.502+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-09-09T17:02:40.503+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-09-09T17:02:40.503+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-09-09T17:02:40.504+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=89b80bf9-d0e1-47c1-bb80-7e89913e9ace, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4e9695cf policy-pap | 
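
Only a handful of the values in the ConsumerConfig dumps above and below differ from the kafka-clients defaults. A minimal sketch of an equivalent consumer, assuming only what the dump shows (bootstrap server kafka:9092, latest offset reset, string deserializers, the policy-pdp-pap topic); PAP's real consumer is wrapped in the KafkaConsumerWrapper named in these records, so this is an illustration of the configuration, not PAP's implementation.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirrored from the ConsumerConfig dump in this log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // The 15000 ms matches the fetchTimeout PAP logs for its source.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }
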
[2024-09-09T17:02:40.516+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=89b80bf9-d0e1-47c1-bb80-7e89913e9ace, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-09-09T17:02:40.516+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 89b80bf9-d0e1-47c1-bb80-7e89913e9ace policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 
policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-09-09T17:02:40.522+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:40.522+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:40.522+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901360522 policy-pap | [2024-09-09T17:02:40.522+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-09-09T17:02:40.523+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-09-09T17:02:40.523+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=67afa490-4187-4c1c-8b73-0079af70ad9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@77978658 policy-pap | [2024-09-09T17:02:40.523+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=67afa490-4187-4c1c-8b73-0079af70ad9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-09-09T17:02:40.523+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | 
sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-09-09T17:02:40.528+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:40.528+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:40.528+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901360528 policy-pap | [2024-09-09T17:02:40.528+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-09-09T17:02:40.529+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-09-09T17:02:40.529+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=67afa490-4187-4c1c-8b73-0079af70ad9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-09-09T17:02:40.529+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=89b80bf9-d0e1-47c1-bb80-7e89913e9ace, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-09-09T17:02:40.529+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e3171c62-9eb9-4fa8-a8c8-573c33fa995c, alive=false, 
publisher=null]]: starting policy-pap | [2024-09-09T17:02:40.542+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | 
ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-09-09T17:02:40.552+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2024-09-09T17:02:40.570+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:40.570+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:40.570+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901360570 policy-pap | [2024-09-09T17:02:40.571+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e3171c62-9eb9-4fa8-a8c8-573c33fa995c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-09-09T17:02:40.571+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=847bd73d-f164-4f78-86ba-065c51bf4b5c, alive=false, publisher=null]]: starting policy-pap | [2024-09-09T17:02:40.571+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 
policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-09-09T17:02:40.572+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
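
Both producers are created idempotent, which is what acks = -1, enable.idempotence = true and retries = 2147483647 in the ProducerConfig dumps amount to. A minimal sketch of an equivalent producer under those settings; the payload is a placeholder, since the actual PDP-UPDATE/STATE-CHANGE messages are built elsewhere in PAP.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Mirrors the non-default values in the ProducerConfig dump above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                 // acks = -1 in the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");  // "Instantiated an idempotent producer"
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Placeholder payload on the topic both sinks in this log publish to.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }
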
policy-pap | [2024-09-09T17:02:40.578+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-09-09T17:02:40.579+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-09-09T17:02:40.579+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1725901360578 policy-pap | [2024-09-09T17:02:40.579+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=847bd73d-f164-4f78-86ba-065c51bf4b5c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-09-09T17:02:40.579+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-09-09T17:02:40.579+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-09-09T17:02:40.580+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-09-09T17:02:40.582+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-09-09T17:02:40.590+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-09-09T17:02:40.590+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-09-09T17:02:40.594+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-09-09T17:02:40.594+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-09-09T17:02:40.594+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-09-09T17:02:40.594+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-09-09T17:02:40.598+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-09-09T17:02:40.599+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.856 seconds (process running for 10.467) policy-pap | [2024-09-09T17:02:40.969+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-pap | [2024-09-09T17:02:40.969+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-pap | [2024-09-09T17:02:40.971+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-09-09T17:02:40.971+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-pap | [2024-09-09T17:02:41.010+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-09-09T17:02:41.010+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Cluster ID: hj4zLbq6T_aCBjzSL5dtSA policy-pap | [2024-09-09T17:02:41.082+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2024-09-09T17:02:41.091+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2024-09-09T17:02:41.092+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2024-09-09T17:02:41.127+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-09-09T17:02:41.201+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-09-09T17:02:41.277+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-09-09T17:02:41.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-09-09T17:02:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-09-09T17:02:41.614+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-pap | [2024-09-09T17:02:42.022+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-09-09T17:02:42.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] (Re-)joining group policy-pap | [2024-09-09T17:02:42.045+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-09-09T17:02:42.047+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-09-09T17:02:42.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9 policy-pap | [2024-09-09T17:02:42.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-09-09T17:02:42.060+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-09-09T17:02:42.060+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Request joining group due to: need to re-join with the given member-id: consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823 policy-pap | [2024-09-09T17:02:42.060+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-09-09T17:02:42.060+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] (Re-)joining group policy-pap | [2024-09-09T17:02:45.084+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Successfully joined group with generation Generation{generationId=1, memberId='consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823', protocol='range'} policy-pap | [2024-09-09T17:02:45.085+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9', protocol='range'} policy-pap | [2024-09-09T17:02:45.094+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Finished assignment for group at generation 1: {consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-09-09T17:02:45.094+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-09-09T17:02:45.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-3f9370f4-1336-4690-a98f-ad84b779dfe9', protocol='range'} policy-pap | [2024-09-09T17:02:45.120+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Successfully synced group in generation Generation{generationId=1, memberId='consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3-74a3f7ac-6112-4af5-9311-f57b67b8c823', protocol='range'} policy-pap | [2024-09-09T17:02:45.120+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new 
Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-09-09T17:02:45.120+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-09-09T17:02:45.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-09-09T17:02:45.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-09-09T17:02:45.149+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-09-09T17:02:45.150+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-09-09T17:02:45.171+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-89b80bf9-d0e1-47c1-bb80-7e89913e9ace-3, groupId=89b80bf9-d0e1-47c1-bb80-7e89913e9ace] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-09-09T17:02:45.172+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
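[editor's note] The sequence above is the standard Kafka consumer-group handshake: the first join is rejected with MemberIdRequiredException so the client retries with its assigned member id, the leader then computes a range assignment, and each member syncs and resets offsets for its partitions. A minimal sketch of a consumer that would produce the same handshake, assuming the logged group id and topic (client side only):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap"); // matches groupId=policy-pap above
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // When no committed offset exists, this reset policy decides the starting position
        // ("Found no committed offset ... Resetting offset for partition policy-pdp-pap-0").
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // triggers the (Re-)joining group sequence
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d value=%s%n", r.partition(), r.offset(), r.value());
            }
        }
    }
}
```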
policy-pap | [2024-09-09T17:03:02.005+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-09-09T17:03:02.006+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"942a4199-4630-4413-bf52-c2fe4335b00e","timestampMs":1725901381971,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-pap | [2024-09-09T17:03:02.007+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"942a4199-4630-4413-bf52-c2fe4335b00e","timestampMs":1725901381971,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-pap | [2024-09-09T17:03:02.017+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-09-09T17:03:02.138+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting policy-pap | [2024-09-09T17:03:02.138+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting listener policy-pap | [2024-09-09T17:03:02.139+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting timer policy-pap | [2024-09-09T17:03:02.139+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42, expireMs=1725901412139] policy-pap | [2024-09-09T17:03:02.140+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting enqueue policy-pap | [2024-09-09T17:03:02.140+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42, expireMs=1725901412139] policy-pap | [2024-09-09T17:03:02.140+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate started policy-pap | [2024-09-09T17:03:02.143+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","timestampMs":1725901382119,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.179+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","timestampMs":1725901382119,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.181+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","timestampMs":1725901382119,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.193+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-09-09T17:03:02.194+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-09-09T17:03:02.200+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fe54868f-f1b7-42a5-b1e6-8219904f398f","timestampMs":1725901382187,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-pap | [2024-09-09T17:03:02.206+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fe54868f-f1b7-42a5-b1e6-8219904f398f","timestampMs":1725901382187,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup"} policy-pap | [2024-09-09T17:03:02.208+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-09-09T17:03:02.208+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f216e26b-a14a-4d74-99b7-55c9b6fb9d61","timestampMs":1725901382189,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping enqueue policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping timer policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42, expireMs=1725901412139] policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping listener policy-pap | [2024-09-09T17:03:02.231+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopped policy-pap | [2024-09-09T17:03:02.234+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"f216e26b-a14a-4d74-99b7-55c9b6fb9d61","timestampMs":1725901382189,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.235+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42 policy-pap | [2024-09-09T17:03:02.239+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate successful policy-pap | [2024-09-09T17:03:02.240+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d start publishing next request policy-pap | [2024-09-09T17:03:02.240+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange starting policy-pap | [2024-09-09T17:03:02.240+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange starting listener policy-pap | [2024-09-09T17:03:02.240+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange starting timer policy-pap | [2024-09-09T17:03:02.240+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d40eafe5-bdce-45c7-bf5a-7247145dbd40, expireMs=1725901412240] policy-pap | [2024-09-09T17:03:02.241+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=d40eafe5-bdce-45c7-bf5a-7247145dbd40, expireMs=1725901412240] policy-pap | [2024-09-09T17:03:02.241+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange starting enqueue policy-pap | [2024-09-09T17:03:02.241+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange started policy-pap | [2024-09-09T17:03:02.241+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","timestampMs":1725901382120,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.253+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","timestampMs":1725901382120,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.254+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-09-09T17:03:02.265+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"1549db70-2b3a-49c8-8e37-62ea4d21d38b","timestampMs":1725901382251,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.266+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d40eafe5-bdce-45c7-bf5a-7247145dbd40 policy-pap | [2024-09-09T17:03:02.317+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","timestampMs":1725901382120,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.317+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-09-09T17:03:02.321+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d40eafe5-bdce-45c7-bf5a-7247145dbd40","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"1549db70-2b3a-49c8-8e37-62ea4d21d38b","timestampMs":1725901382251,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange stopping policy-pap | [2024-09-09T17:03:02.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange stopping enqueue policy-pap | [2024-09-09T17:03:02.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange stopping timer policy-pap | [2024-09-09T17:03:02.322+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d40eafe5-bdce-45c7-bf5a-7247145dbd40, expireMs=1725901412240] policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange stopping listener policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange stopped policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpStateChange successful policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d start publishing next request policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting listener policy-pap | [2024-09-09T17:03:02.323+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting timer policy-pap | [2024-09-09T17:03:02.324+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=4096c3e6-0e02-4432-89af-04183b792907, expireMs=1725901412324] 
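[editor's note] Each PdpUpdate/PdpStateChange request above registers a ~30-second timer keyed by request id ("update timer registered Timer [name=..., expireMs=...]"); the timer is cancelled when a matching PDP_STATUS response arrives and is discarded if it expires first. A minimal sketch of that request-timeout pattern using a ScheduledExecutorService; the class and method names are illustrative, not PAP's actual TimerManager API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/** Illustrative request-timeout registry; a stand-in, not the real ONAP TimerManager. */
public class RequestTimerSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();

    /** Analogue of "update timer registered Timer [name=<requestId>, expireMs=...]". */
    public void register(String requestId, long timeoutMs) {
        ScheduledFuture<?> future = scheduler.schedule(() -> {
            timers.remove(requestId);
            System.out.println("update timer discarded (expired) " + requestId);
        }, timeoutMs, TimeUnit.MILLISECONDS);
        timers.put(requestId, future);
    }

    /** Called when a PDP_STATUS response carrying the matching request id arrives. */
    public void cancel(String requestId) {
        ScheduledFuture<?> future = timers.remove(requestId);
        if (future != null && future.cancel(false)) {
            System.out.println("update timer cancelled " + requestId);
        }
    }
}
```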
policy-pap | [2024-09-09T17:03:02.324+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate starting enqueue policy-pap | [2024-09-09T17:03:02.324+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate started policy-pap | [2024-09-09T17:03:02.324+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4096c3e6-0e02-4432-89af-04183b792907","timestampMs":1725901382306,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.341+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4096c3e6-0e02-4432-89af-04183b792907","timestampMs":1725901382306,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.342+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-09-09T17:03:02.344+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-483cf0fa-f0ba-41ce-a5b2-79a516dcc0d2","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4096c3e6-0e02-4432-89af-04183b792907","timestampMs":1725901382306,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.345+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-09-09T17:03:02.350+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4096c3e6-0e02-4432-89af-04183b792907","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3db7acf7-6059-45c0-b6d6-885046e4cbe7","timestampMs":1725901382336,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping enqueue policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping timer policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=4096c3e6-0e02-4432-89af-04183b792907, expireMs=1725901412324] policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopping listener policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate stopped 
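[editor's note] The PDP_STATUS payloads exchanged above are plain JSON, and the components log "Using GSON for REST calls", so a Gson round-trip illustrates their shape. A minimal sketch parsing the response logged just above; the POJOs below are hand-written stand-ins for the policy framework's real message classes, carrying only the fields visible in the log:

```java
import com.google.gson.Gson;

public class PdpStatusParseSketch {
    // Illustrative stand-ins for the real PdpStatus model; fields match the logged JSON only.
    static class PdpResponse { String responseTo; String responseStatus; String responseMessage; }
    static class PdpStatus {
        String pdpType, state, healthy, description, messageName, requestId, name, pdpGroup, pdpSubgroup;
        long timestampMs;
        PdpResponse response;
    }

    public static void main(String[] args) {
        // Values taken from the PDP_STATUS message in the log above.
        String json = "{\"pdpType\":\"apex\",\"state\":\"ACTIVE\",\"healthy\":\"HEALTHY\","
            + "\"response\":{\"responseTo\":\"4096c3e6-0e02-4432-89af-04183b792907\","
            + "\"responseStatus\":\"SUCCESS\",\"responseMessage\":\"Pdp already updated\"},"
            + "\"messageName\":\"PDP_STATUS\",\"timestampMs\":1725901382336,"
            + "\"name\":\"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d\",\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        // PAP correlates the response to its pending request via response.responseTo,
        // which is what lets it cancel the update timer registered for that request id.
        System.out.println(status.messageName + " for " + status.response.responseTo
            + " -> " + status.response.responseStatus);
    }
}
```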
policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4096c3e6-0e02-4432-89af-04183b792907","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3db7acf7-6059-45c0-b6d6-885046e4cbe7","timestampMs":1725901382336,"name":"apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-09-09T17:03:02.351+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4096c3e6-0e02-4432-89af-04183b792907 policy-pap | [2024-09-09T17:03:02.354+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d PdpUpdate successful policy-pap | [2024-09-09T17:03:02.354+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-83423981-15dc-4a5e-b864-e52ab7fc6d7d has no more requests policy-pap | [2024-09-09T17:03:32.140+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=7ca9b1b0-e5c4-4bdc-af62-b9fd375bec42, expireMs=1725901412139] policy-pap | [2024-09-09T17:03:32.240+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d40eafe5-bdce-45c7-bf5a-7247145dbd40, expireMs=1725901412240] policy-pap | [2024-09-09T17:03:40.092+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. policy-pap | [2024-09-09T17:03:40.143+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-09-09T17:03:40.154+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-09-09T17:03:40.155+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-09-09T17:03:40.562+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup policy-pap | [2024-09-09T17:03:41.076+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup policy-pap | [2024-09-09T17:03:41.077+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup policy-pap | [2024-09-09T17:03:41.630+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup policy-pap | [2024-09-09T17:03:41.834+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-09-09T17:03:41.934+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-09-09T17:03:41.934+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group testGroup policy-pap | [2024-09-09T17:03:41.935+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group testGroup policy-pap | [2024-09-09T17:03:41.949+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-09-09T17:03:41Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-09-09T17:03:41Z, user=policyadmin)] policy-pap | [2024-09-09T17:03:42.576+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup policy-pap | [2024-09-09T17:03:42.577+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy 
onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-pap | [2024-09-09T17:03:42.577+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-09-09T17:03:42.577+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup policy-pap | [2024-09-09T17:03:42.577+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup policy-pap | [2024-09-09T17:03:42.589+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-09-09T17:03:42Z, user=policyadmin)] policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group defaultGroup policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup policy-pap | [2024-09-09T17:03:42.889+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup policy-pap | [2024-09-09T17:03:42.899+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-09-09T17:03:42Z, user=policyadmin)] policy-pap | [2024-09-09T17:03:43.428+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup policy-pap | [2024-09-09T17:03:43.429+00:00|INFO|SessionData|http-nio-6969-exec-2] deleting DB group testGroup policy-pap | [2024-09-09T17:04:40.595+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms =================================== ======== Logs from prometheus ======== prometheus | ts=2024-09-09T17:02:07.106Z caller=main.go:601 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-09-09T17:02:07.106Z caller=main.go:645 level=info msg="Starting Prometheus Server" mode=server version="(version=2.54.1, branch=HEAD, revision=e6cfa720fbe6280153fab13090a483dbd40bece3)" prometheus | ts=2024-09-09T17:02:07.107Z caller=main.go:650 level=info build_context="(go=go1.22.6, platform=linux/amd64, user=root@812ffd741951, date=20240827-10:56:41, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-09-09T17:02:07.107Z caller=main.go:651 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-09-09T17:02:07.107Z caller=main.go:652 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-09-09T17:02:07.107Z caller=main.go:653 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-09-09T17:02:07.115Z caller=web.go:571 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-09-09T17:02:07.115Z caller=main.go:1160 level=info msg="Starting TSDB ..." 
prometheus | ts=2024-09-09T17:02:07.117Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-09-09T17:02:07.117Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 prometheus | ts=2024-09-09T17:02:07.118Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-09-09T17:02:07.118Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.06µs prometheus | ts=2024-09-09T17:02:07.118Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-09-09T17:02:07.119Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-09-09T17:02:07.119Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=19.491µs wal_replay_duration=312.755µs wbl_replay_duration=380ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.06µs total_replay_duration=357.506µs prometheus | ts=2024-09-09T17:02:07.121Z caller=main.go:1181 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-09-09T17:02:07.121Z caller=main.go:1184 level=info msg="TSDB started" prometheus | ts=2024-09-09T17:02:07.121Z caller=main.go:1367 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | ts=2024-09-09T17:02:07.122Z caller=main.go:1404 level=info msg="updated GOGC" old=100 new=75 prometheus | ts=2024-09-09T17:02:07.122Z caller=main.go:1415 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.382414ms db_storage=1.8µs remote_storage=2.491µs web_handler=290ns query_engine=4.43µs scrape=220.533µs scrape_sd=118.992µs notify=22.26µs notify_sd=449.288µs rules=5.25µs tracing=14.14µs prometheus | ts=2024-09-09T17:02:07.122Z caller=main.go:1145 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-09-09T17:02:07.122Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..." 
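[editor's note] Prometheus above reports "Server is ready to receive web requests" on 0.0.0.0:9090 after loading /etc/prometheus/prometheus.yml. A quick way a CSIT-style health check could confirm that, using Prometheus's standard /-/ready endpoint; the localhost host name is an assumption, not taken from the log:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PrometheusReadySketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // /-/ready is Prometheus's built-in readiness endpoint; it returns 200
        // once the server can serve traffic (host assumed reachable as localhost).
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:9090/-/ready"))
            .GET()
            .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body().trim());
    }
}
```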
=================================== ======== Logs from simulator ======== simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-09-09 17:02:03,421 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-09-09 17:02:03,477 INFO org.onap.policy.models.simulators starting simulator | 2024-09-09 17:02:03,477 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-09-09 17:02:03,669 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-09-09 17:02:03,670 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-09-09 17:02:03,789 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-09-09 17:02:03,799 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:03,802 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:03,807 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-09-09 17:02:03,875 INFO Session workerName=node0 simulator | 2024-09-09 17:02:04,408 INFO 
Using GSON for REST calls simulator | 2024-09-09 17:02:04,499 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} simulator | 2024-09-09 17:02:04,507 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-09-09 17:02:04,514 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1564ms simulator | 2024-09-09 17:02:04,514 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4288 ms. simulator | 2024-09-09 17:02:04,519 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-09-09 17:02:04,525 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-09-09 17:02:04,525 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:04,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, 
jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:04,531 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-09-09 17:02:04,547 INFO Session workerName=node0 simulator | 2024-09-09 17:02:04,604 INFO Using GSON for REST calls simulator | 2024-09-09 17:02:04,613 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} simulator | 2024-09-09 17:02:04,614 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-09-09 17:02:04,614 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1664ms simulator | 2024-09-09 17:02:04,614 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4915 ms. simulator | 2024-09-09 17:02:04,616 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-09-09 17:02:04,620 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-09-09 17:02:04,622 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:04,624 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-09-09 17:02:04,625 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-09-09 17:02:04,628 INFO Session workerName=node0 simulator | 2024-09-09 17:02:04,681 INFO Using GSON for REST calls simulator | 2024-09-09 17:02:04,695 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} simulator | 2024-09-09 17:02:04,696 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-09-09 17:02:04,696 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1746ms simulator | 2024-09-09 17:02:04,696 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4927 ms. 
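[editor's note] Each simulator above is an embedded Jetty 11 server hosting a Jersey ServletContainer on its own port (A&AI on 6666, SDNC on 6668, SO on 6669, VFC on 6670). A minimal sketch of that embedding pattern; the resource package name is a placeholder, not the simulators' actual package:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

public class EmbeddedSimulatorSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(6669); // SO simulator port from the log

        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/"); // contextPath=/ as in the JettyServletServer dump above
        server.setHandler(context);

        // Jersey servlet mounted at /*, mirroring servlets={/*=...ServletContainer...} in the log.
        ServletHolder jersey = context.addServlet(ServletContainer.class, "/*");
        jersey.setInitParameter("jersey.config.server.provider.packages",
            "org.example.simulator.rest"); // placeholder package, not the real simulator's

        server.start();
        server.join();
    }
}
```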
simulator | 2024-09-09 17:02:04,697 INFO org.onap.policy.models.simulators starting VFC simulator
simulator | 2024-09-09 17:02:04,699 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-09-09 17:02:04,700 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-09-09 17:02:04,700 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-09-09 17:02:04,701 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-09-09 17:02:04,703 INFO Session workerName=node0
simulator | 2024-09-09 17:02:04,741 INFO Using GSON for REST calls
simulator | 2024-09-09 17:02:04,750 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
simulator | 2024-09-09 17:02:04,751 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
simulator | 2024-09-09 17:02:04,751 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1801ms
simulator | 2024-09-09 17:02:04,751 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4949 ms.
simulator | 2024-09-09 17:02:04,752 INFO org.onap.policy.models.simulators started
===================================
======== Logs from zookeeper ========
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2024-09-09 17:02:05,388] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,390] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,390] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,390] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,390] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,391] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-09-09 17:02:05,391] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-09-09 17:02:05,391] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-09-09 17:02:05,391] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2024-09-09 17:02:05,392] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2024-09-09 17:02:05,393] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,393] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,393] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,393] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,393] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-09-09 17:02:05,393] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2024-09-09 17:02:05,403] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2024-09-09 17:02:05,405] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2024-09-09 17:02:05,405] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2024-09-09 17:02:05,407] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2024-09-09 17:02:05,414] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO   ______                  _                                           (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO  |___  /                 | |                                          (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO     / /    ___     ___   | | __   ___    ___   _ __     ___   _ __    (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO    / /    / _ \   / _ \  | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__|   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO   / /__  | (_) | | (_) | |   <  |  __/ |  __/ | |_) | |  __/ | |      (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_|      (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO                                               | |                     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO                                               |_|                     (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,415] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,416] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,417] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,418] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2024-09-09 17:02:05,419] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,419] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,420] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2024-09-09 17:02:05,420] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2024-09-09 17:02:05,420] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,420] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,421] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,421] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,421] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,421] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2024-09-09 17:02:05,423] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,423] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,423] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2024-09-09 17:02:05,423] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2024-09-09 17:02:05,423] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,443] INFO Logging initialized @463ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2024-09-09 17:02:05,493] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2024-09-09 17:02:05,493] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2024-09-09 17:02:05,508] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server)
zookeeper | [2024-09-09 17:02:05,532] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2024-09-09 17:02:05,532] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2024-09-09 17:02:05,533] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
zookeeper | [2024-09-09 17:02:05,544] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2024-09-09 17:02:05,553] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2024-09-09 17:02:05,564] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2024-09-09 17:02:05,564] INFO Started @588ms (org.eclipse.jetty.server.Server)
zookeeper | [2024-09-09 17:02:05,564] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2024-09-09 17:02:05,567] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2024-09-09 17:02:05,568] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2024-09-09 17:02:05,569] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2024-09-09 17:02:05,570] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2024-09-09 17:02:05,582] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2024-09-09 17:02:05,583] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2024-09-09 17:02:05,583] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2024-09-09 17:02:05,583] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2024-09-09 17:02:05,588] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2024-09-09 17:02:05,588] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2024-09-09 17:02:05,591] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2024-09-09 17:02:05,591] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2024-09-09 17:02:05,592] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-09-09 17:02:05,598] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2024-09-09 17:02:05,600] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2024-09-09 17:02:05,610] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2024-09-09 17:02:05,611] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2024-09-09 17:02:06,700] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
===================================
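For reference when debugging a run like this: the container above starts ZooKeeper 3.8.4 in standalone mode, binds the client port on 0.0.0.0:2181, and serves the Jetty AdminServer on port 8080 under /commands (all per the log). A minimal shell probe sketch; reaching the ports via localhost and the four-letter-word whitelist are assumptions, not shown in the log:

# Hypothetical probes against the ZooKeeper container while the stack is up.
# "ruok" is a standard four-letter-word check, but on ZooKeeper 3.5+ it must be
# allowed via 4lw.commands.whitelist (an assumption about this image's config).
echo ruok | nc localhost 2181
# "stat" is a built-in AdminServer command endpoint under the /commands URL
# that the log shows bound on port 8080.
curl -s http://localhost:8080/commands/stat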
Tearing down containers...
Container policy-csit  Stopping
Container policy-apex-pdp  Stopping
Container grafana  Stopping
Container policy-csit  Stopped
Container policy-csit  Removing
Container policy-csit  Removed
Container grafana  Stopped
Container grafana  Removing
Container grafana  Removed
Container prometheus  Stopping
Container prometheus  Stopped
Container prometheus  Removing
Container prometheus  Removed
Container policy-apex-pdp  Stopped
Container policy-apex-pdp  Removing
Container policy-apex-pdp  Removed
Container simulator  Stopping
Container policy-pap  Stopping
Container simulator  Stopped
Container simulator  Removing
Container simulator  Removed
Container policy-pap  Stopped
Container policy-pap  Removing
Container policy-pap  Removed
Container policy-api  Stopping
Container kafka  Stopping
Container kafka  Stopped
Container kafka  Removing
Container kafka  Removed
Container zookeeper  Stopping
Container zookeeper  Stopped
Container zookeeper  Removing
Container zookeeper  Removed
Container policy-api  Stopped
Container policy-api  Removing
Container policy-api  Removed
Container policy-db-migrator  Stopping
Container policy-db-migrator  Stopped
Container policy-db-migrator  Removing
Container policy-db-migrator  Removed
Container mariadb  Stopping
Container mariadb  Stopped
Container mariadb  Removing
Container mariadb  Removed
Network compose_default  Removing
Network compose_default  Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2127 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins12602930278541337886.sh
---> sysstat.sh
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins83141288678297580.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
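Condensed, the package-listing trace above snapshots the installed packages after the build, diffs them against the pre-build snapshot, and archives all three files. A sketch of the same logic, assuming a Debian-family host; $WORKSPACE stands in for the workspace path spelled out in the trace:

# Snapshot the installed packages after the build (the pre-build pass of the
# same script wrote /tmp/packages_start.txt).
dpkg -l | grep '^ii' > /tmp/packages_end.txt
# Diff the snapshots when both exist; diff exits 1 when the files differ, so
# the result is captured rather than used as a test.
if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
  diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
fi
# Keep all three lists alongside the build artifacts.
mkdir -p "$WORKSPACE/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"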
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins12936599351292569843.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M6Od from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-M6Od/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins8915221664682835584.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config869483089150114589tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16931416145102673829.sh
---> create-netrc.sh
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5157987291895095974.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M6Od from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-M6Od/bin to PATH
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins7789146553995145825.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14890703495805726904.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M6Od from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-M6Od/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins17616677049301893304.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M6Od from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-M6Od/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/114
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
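The pattern in the archiving step above selects the surefire/Robot output files under any target/surefire-reports directory. Roughly the same selection expressed with find, assuming $WORKSPACE points at the job workspace; the actual lftools invocation is not shown in the log:

# Hypothetical equivalent of the logged glob: -p **/target/surefire-reports/*-output.txt
find "$WORKSPACE" -type f -path '*/target/surefire-reports/*-output.txt' -print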
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-37322 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         895       25099           0        6171       30815
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:f1:97:41 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.182/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85993sec preferred_lft 85993sec
    inet6 fe80::f816:3eff:fef1:9741/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9f:67:1f:1a brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9fff:fe67:1f1a/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-37322)  09/09/24  _x86_64_  (8 CPU)

16:59:28     LINUX RESTART  (8 CPU)

17:00:02          tps      rtps      wtps   bread/s   bwrtn/s
17:01:01       329.01     43.38    285.63   1958.45  31991.73
17:02:01       368.19     22.01    346.18   2658.62 162651.42
17:03:01       343.64      9.50    334.14    406.67  41213.88
17:04:01        88.97      0.23     88.74     11.33  32530.04
17:05:01        30.46      0.02     30.44      3.33  23824.43
17:06:01        71.59      1.27     70.32     89.45  23865.11
Average:       204.97     12.65    192.32    851.57  52737.05

17:00:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
17:01:01     30113788  31686440   2825432      8.58     69804   1813828   1415016      4.16    882896   1650020    154060
17:02:01     25919232  31426496   7019988     21.31    125744   5537948   4576124     13.46   1218252   5284168      1100
17:03:01     23483636  29457020   9455584     28.71    160848   5915116   9173296     26.99   3449368   5388192     57692
17:04:01     23560844  29556344   9378376     28.47    170172   5926732   9064540     26.67   3363052   5393888       328
17:05:01     23882812  29873676   9056408     27.49    170480   5926668   7388452     21.74   3064776   5386076        40
17:06:01     25711908  31564284   7227312     21.94    172220   5799248   1609084      4.73   1411808   5261688      1948
Average:     25445370  30594043   7493850     22.75    144878   5153257   5537752     16.29   2231692   4727339     35861

17:00:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
17:01:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:01:01         ens3    172.00    136.16   1191.33     52.86      0.00      0.00      0.00      0.00
17:01:01           lo      1.42      1.42      0.16      0.16      0.00      0.00      0.00      0.00
17:02:01  vethf041614      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:02:01  vethdb6d534      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:02:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:02:01  veth051f7af      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:03:01      docker0     10.25     13.85      1.91    220.42      0.00      0.00      0.00      0.00
17:03:01  veth1b19d96      0.65      0.92      0.07      0.39      0.00      0.00      0.00      0.00
17:03:01         ens3   1648.71    957.86  33949.91    138.30      0.00      0.00      0.00      0.00
17:03:01  vethd830dee      7.10      7.07      1.25      0.75      0.00      0.00      0.00      0.00
17:04:01      docker0      2.47      3.32      0.18     65.36      0.00      0.00      0.00      0.00
17:04:01  veth7a85e0c      2.27      2.05      1.73      1.86      0.00      0.00      0.00      0.00
17:04:01  veth1b19d96      0.30      0.23      0.02      0.01      0.00      0.00      0.00      0.00
17:04:01         ens3      7.32      6.33     66.18      1.19      0.00      0.00      0.00      0.00
17:05:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:05:01         ens3     15.80     15.06      9.06     14.72      0.00      0.00      0.00      0.00
17:05:01  vethd830dee     61.06     60.64     17.94     40.63      0.00      0.00      0.00      0.00
17:05:01  veth7e3e1d3      5.45      7.15      0.83      0.96      0.00      0.00      0.00      0.00
17:06:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:06:01         ens3     41.19     35.71     65.84     16.92      0.00      0.00      0.00      0.00
17:06:01           lo     27.60     27.60      2.55      2.55      0.00      0.00      0.00      0.00
Average:      docker0      2.12      2.87      0.35     47.76      0.00      0.00      0.00      0.00
Average:         ens3    261.53    149.96   5614.81     24.64      0.00      0.00      0.00      0.00
Average:           lo      3.97      3.97      0.37      0.37      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-37322)  09/09/24  _x86_64_  (8 CPU)

16:59:28     LINUX RESTART  (8 CPU)

17:00:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
17:01:01        all      9.53      0.00      0.90      3.41      0.03     86.12
17:01:01          0     16.87      0.00      1.00      2.16      0.03     79.94
17:01:01          1     17.20      0.00      1.12      0.82      0.05     80.81
17:01:01          2     12.39      0.00      0.78      0.36      0.05     86.43
17:01:01          3     10.10      0.00      0.77      0.75      0.02     88.37
17:01:01          4      2.65      0.00      0.63      0.27      0.02     96.44
17:01:01          5      1.17      0.00      0.48      0.05      0.02     98.29
17:01:01          6      3.38      0.00      0.27      0.07      0.02     96.26
17:01:01          7     12.49      0.00      2.14     22.86      0.07     62.43
17:02:01        all     13.53      0.00      5.47     11.11      0.06     69.83
17:02:01          0     20.21      0.00      5.18     24.89      0.05     49.67
17:02:01          1     12.98      0.00      5.33      2.99      0.05     78.65
17:02:01          2     14.95      0.00      5.51      2.52      0.07     76.95
17:02:01          3     12.70      0.00      4.77      8.86      0.05     73.62
17:02:01          4     11.19      0.00      4.63      1.83      0.03     82.32
17:02:01          5     12.66      0.00      5.49      8.11      0.10     73.63
17:02:01          6     11.84      0.00      7.29     27.78      0.05     53.03
17:02:01          7     11.70      0.00      5.56     11.88      0.05     70.81
17:03:01        all     27.08      0.00      3.73      3.53      0.12     65.54
17:03:01          0     22.27      0.00      3.22      3.69      0.08     70.73
17:03:01          1     31.36      0.00      4.06      1.33      0.17     63.08
17:03:01          2     29.97      0.00      4.40      6.11      0.10     59.42
17:03:01          3     25.98      0.00      3.86      2.03      0.12     68.00
17:03:01          4     33.55      0.00      3.83      0.17      0.08     62.36
17:03:01          5     25.75      0.00      3.57      8.17      0.13     62.38
17:03:01          6     25.68      0.00      3.26      3.60      0.08     67.37
17:03:01          7     22.15      0.00      3.56      3.16      0.13     70.99
17:04:01        all      7.40      0.00      1.30      2.41      0.07     88.82
17:04:01          0      5.48      0.00      0.89      0.22      0.07     93.35
17:04:01          1      9.68      0.00      1.59      0.75      0.07     87.91
17:04:01          2      8.02      0.00      0.97     11.91      0.12     78.97
17:04:01          3      6.86      0.00      1.15      1.82      0.05     90.12
17:04:01          4      7.32      0.00      1.44      0.12      0.07     91.05
17:04:01          5      7.18      0.00      1.49      2.28      0.07     88.97
17:04:01          6      7.23      0.00      1.63      2.01      0.08     89.05
17:04:01          7      7.45      0.00      1.29      0.20      0.03     91.03
17:05:01        all      1.84      0.00      0.50      2.20      0.07     95.39
17:05:01          0      1.60      0.00      0.63      0.00      0.05     97.72
17:05:01          1      1.67      0.00      0.50      0.40      0.03     97.40
17:05:01          2      1.69      0.00      0.42     13.13      0.08     84.68
17:05:01          3      1.62      0.00      0.63      0.58      0.03     97.13
17:05:01          4      2.38      0.00      0.45      0.07      0.05     97.06
17:05:01          5      1.79      0.00      0.55      2.01      0.05     95.60
17:05:01          6      2.01      0.00      0.40      0.54      0.08     96.97
17:05:01          7      1.99      0.00      0.40      0.85      0.13     96.63
17:06:01        all      4.95      0.00      0.54      7.73      0.05     86.73
17:06:01          0      8.81      0.00      0.67      4.88      0.05     85.59
17:06:01          1     10.29      0.00      0.57      1.55      0.07     87.52
17:06:01          2      0.62      0.00      0.57     21.14      0.05     77.63
17:06:01          3      0.67      0.00      0.43      3.56      0.05     95.29
17:06:01          4     13.35      0.00      0.59      4.20      0.08     81.78
17:06:01          5      0.70      0.00      0.50      1.17      0.03     97.60
17:06:01          6      1.68      0.00      0.53      2.01      0.05     95.72
17:06:01          7      3.58      0.00      0.43     23.35      0.03     72.60
Average:        all     10.71      0.00      2.07      5.06      0.07     82.09
Average:          0     12.51      0.00      1.93      5.96      0.06     79.54
Average:          1     13.84      0.00      2.19      1.30      0.07     82.59
Average:          2     11.25      0.00      2.10      9.24      0.08     77.33
Average:          3      9.64      0.00      1.94      2.94      0.05     85.44
Average:          4     11.74      0.00      1.93      1.11      0.06     85.16
Average:          5      8.20      0.00      2.01      3.63      0.07     86.09
Average:          6      8.64      0.00      2.23      5.98      0.06     83.10
Average:          7      9.87      0.00      2.23     10.35      0.08     77.48
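The three reports above are sysstat sar output. A sketch for regenerating them from the day's binary data file; the data-file location and name vary by distribution (/var/log/sysstat/sa09 here is an assumption for the 9th), while the flags match the logged invocations:

# -b: I/O and transfer rates; -r: memory utilization; -n DEV: per-interface traffic
sar -b -r -n DEV -f /var/log/sysstat/sa09
# -P ALL: per-CPU utilization, as in the second report
sar -P ALL -f /var/log/sysstat/sa09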