Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-30748 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-eV3j5C7bRVrp/agent.2097
SSH_AGENT_PID=2099
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_6138194625593028563.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_6138194625593028563.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
Commit message: "Fix timeout in pap CSIT for auditing undeploys"
 > git rev-list --no-walk
a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins6496948318696973815.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-H818
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H818/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.2 from /tmp/venv-H818/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.5.0
aspy.yaml==1.3.0
attrs==24.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.159
botocore==1.34.159
bs4==0.0.2
cachetools==5.4.0
certifi==2024.7.4
cffi==1.17.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.7.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.3
email_validator==2.2.0
filelock==3.15.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.33.0
httplib2==0.22.0
identify==2.6.0
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.7.0
kubernetes==30.1.0
lftools==0.37.10
lxml==5.3.0
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
netifaces==0.11.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.3.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.1.0
oslo.config==9.5.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==6.1.1
oslo.serialization==5.4.0
oslo.utils==7.2.0
packaging==24.1
pbr==6.0.0
platformdirs==4.2.2
prettytable==3.11.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
PyJWT==2.9.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.6.0
python-openstackclient==7.0.0
python-swiftclient==4.6.0
PyYAML==6.0.2
referencing==0.35.1
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.20.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.2
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.6
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.0
tqdm==4.66.5
typing_extensions==4.12.2
tzdata==2024.1
urllib3==1.26.19
virtualenv==20.26.3
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
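The lf-activate-venv steps and the "Generating Requirements File" output above can be sketched roughly as the shell sequence below. This is a hedged reconstruction, not lf-activate-venv's actual source: the venv path and output file name are illustrative, and the `pip install lftools` step is left commented so the sketch runs offline.

```shell
# Rough sketch of the venv bootstrap the job performs (paths illustrative).
VENV="$(mktemp -d)/venv-ci"
python3 -m venv "$VENV"            # "Creating python3 venv at /tmp/venv-H818"
. "$VENV/bin/activate"             # "Adding /tmp/venv-H818/bin to PATH"
python --version                   # the log shows "Python 3.10.6"
# pip install lftools              # "Installing: lftools" (skipped here)
pip freeze > "$VENV/requirements.txt"   # "Generating Requirements File"
```

The frozen requirements list that follows in the log is simply this `pip freeze` dump of everything `lftools` pulled in.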
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins10193863751434239533.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12794117638773865317.sh
+ /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter omitted: 60.0MB plugin binary downloaded, average speed ~88.4MB/s]
Setting project configuration for: pap
Configuring docker compose...
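The "Docker Compose Plugin not installed. Installing now..." step above curls the Compose v2 CLI plugin after `docker compose` fails to resolve. A minimal sketch of that install, assuming the standard CLI-plugin directory and GitHub release layout (the pinned version is hypothetical; the log does not show the exact command, and the download itself is left commented so the sketch stays offline):

```shell
# Sketch of a Compose v2 CLI-plugin install; version and paths are assumptions.
COMPOSE_VERSION="v2.24.6"                       # hypothetical pinned version
PLUGIN_DIR="${DOCKER_CONFIG:-$HOME/.docker}/cli-plugins"
PLUGIN_URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
mkdir -p "$PLUGIN_DIR"
# Mirror the job's check: only install when `docker compose` is missing.
if ! docker compose version >/dev/null 2>&1; then
    echo "would fetch: $PLUGIN_URL"
    # curl -fsSL "$PLUGIN_URL" -o "$PLUGIN_DIR/docker-compose"
    # chmod +x "$PLUGIN_DIR/docker-compose"
fi
```

Once the binary is in `~/.docker/cli-plugins/`, `docker compose` resolves as a subcommand, which is what the subsequent "Configuring docker compose..." step relies on.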
Starting apex-pdp application with Grafana
api Pulling
grafana Pulling
prometheus Pulling
pap Pulling
zookeeper Pulling
simulator Pulling
kafka Pulling
apex-pdp Pulling
mariadb Pulling
policy-db-migrator Pulling
[docker compose pull layer progress omitted: interleaved "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress lines for the images' shared and per-image layers]
137MB/159.1MB 1fe734c5fee3 Extracting [=========================================> ] 27.03MB/32.94MB ad1782e4d1ef Extracting [====================================> ] 132MB/180.4MB 1fe734c5fee3 Extracting [=========================================> ] 27.39MB/32.94MB f270a5fd7930 Extracting [=============================================> ] 144.8MB/159.1MB ad1782e4d1ef Extracting [=====================================> ] 134.3MB/180.4MB f270a5fd7930 Extracting [===============================================> ] 151MB/159.1MB 1fe734c5fee3 Extracting [=============================================> ] 30.28MB/32.94MB ad1782e4d1ef Extracting [======================================> ] 137.6MB/180.4MB f270a5fd7930 Extracting [=================================================> ] 157.6MB/159.1MB f270a5fd7930 Extracting [==================================================>] 159.1MB/159.1MB 1fe734c5fee3 Extracting [===============================================> ] 31.36MB/32.94MB de723b4c7ed9 Pull complete 114f99593bd8 Pull complete 1fe734c5fee3 Extracting [==================================================>] 32.94MB/32.94MB 154ef881db4f Downloading [==================================================>] 2.679kB/2.679kB 154ef881db4f Verifying Checksum 154ef881db4f Download complete 3e818186829e Downloading [====> ] 4.57MB/50.12MB 92ff7cbea015 Downloading [========> ] 9.731MB/55.22MB ad1782e4d1ef Extracting [======================================> ] 139.3MB/180.4MB eaafa8ad3e2d Downloading [================================================> ] 3.011kB/3.09kB eaafa8ad3e2d Downloading [==================================================>] 3.09kB/3.09kB eaafa8ad3e2d Verifying Checksum eaafa8ad3e2d Download complete fea56ff08967 Downloading [=====================================> ] 3.011kB/4.022kB fea56ff08967 Downloading [==================================================>] 4.022kB/4.022kB fea56ff08967 Download complete 6e62e059c561 Downloading 
[==================================================>] 1.44kB/1.44kB 6e62e059c561 Verifying Checksum 6e62e059c561 Download complete 443ffcabdce2 Downloading [=> ] 3.009kB/137.7kB 443ffcabdce2 Downloading [==================================================>] 137.7kB/137.7kB 443ffcabdce2 Verifying Checksum 443ffcabdce2 Download complete 3e818186829e Downloading [===============> ] 15.24MB/50.12MB 92ff7cbea015 Downloading [====================> ] 22.71MB/55.22MB d59855f97034 Downloading [==================================================>] 100B/100B d59855f97034 Verifying Checksum d59855f97034 Download complete f270a5fd7930 Pull complete 1fe734c5fee3 Pull complete 9038eaba24f8 Extracting [==================================================>] 1.153kB/1.153kB 9038eaba24f8 Extracting [==================================================>] 1.153kB/1.153kB pap Pulled c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB b32c911ea1d7 Downloading [==================================================>] 718B/718B b32c911ea1d7 Download complete api Pulled c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB ad1782e4d1ef Extracting [=======================================> ] 142MB/180.4MB 10ac4908093d Downloading [> ] 310.2kB/30.43MB 3e818186829e Downloading [=============================> ] 29.46MB/50.12MB 92ff7cbea015 Downloading [==================================> ] 37.85MB/55.22MB ad1782e4d1ef Extracting [========================================> ] 147.1MB/180.4MB 10ac4908093d Downloading [=======> ] 4.668MB/30.43MB 9038eaba24f8 Pull complete c8e6f0452a8e Pull complete 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB 0143f8517101 Extracting 
[==================================================>] 5.324kB/5.324kB 3e818186829e Downloading [=========================================> ] 41.14MB/50.12MB 92ff7cbea015 Downloading [============================================> ] 49.2MB/55.22MB ad1782e4d1ef Extracting [==========================================> ] 151.5MB/180.4MB 92ff7cbea015 Verifying Checksum 92ff7cbea015 Download complete 3e818186829e Verifying Checksum 3e818186829e Download complete 10ac4908093d Downloading [=====================> ] 13.38MB/30.43MB 44779101e748 Downloading [==================================================>] 1.744kB/1.744kB 44779101e748 Verifying Checksum 44779101e748 Download complete a721db3e3f3d Downloading [> ] 64.45kB/5.526MB 1850a929b84a Downloading [==================================================>] 149B/149B 1850a929b84a Verifying Checksum 1850a929b84a Download complete 397a918c7da3 Download complete 0143f8517101 Pull complete ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB 806be17e856d Downloading [> ] 539.6kB/89.72MB 92ff7cbea015 Extracting [> ] 557.1kB/55.22MB 04a7796b82ca Pull complete a721db3e3f3d Verifying Checksum a721db3e3f3d Download complete ad1782e4d1ef Extracting [==========================================> ] 154.9MB/180.4MB simulator Pulled 10ac4908093d Downloading [===============================================> ] 28.64MB/30.43MB 634de6c90876 Downloading [===========================================> ] 3.011kB/3.49kB 634de6c90876 Download complete 10ac4908093d Download complete cd00854cfb1a Downloading [=====================> ] 3.011kB/6.971kB cd00854cfb1a Downloading [==================================================>] 6.971kB/6.971kB cd00854cfb1a Download complete 4abcf2066143 Downloading [> ] 48.06kB/3.409MB 5c277da153ce Downloading [==================================================>] 141B/141B 5c277da153ce 
Verifying Checksum 5c277da153ce Download complete 85ed0bf0f127 Downloading [> ] 48.06kB/3.184MB 806be17e856d Downloading [======> ] 11.35MB/89.72MB 92ff7cbea015 Extracting [===> ] 3.342MB/55.22MB 4abcf2066143 Verifying Checksum 4abcf2066143 Download complete 4abcf2066143 Extracting [> ] 65.54kB/3.409MB ee69cc1a77e2 Pull complete 81667b400b57 Extracting [==================================================>] 1.034kB/1.034kB 81667b400b57 Extracting [==================================================>] 1.034kB/1.034kB a59a4ddf8225 Downloading [> ] 48.06kB/4.333MB 10ac4908093d Extracting [> ] 327.7kB/30.43MB 85ed0bf0f127 Verifying Checksum 85ed0bf0f127 Download complete ad1782e4d1ef Extracting [============================================> ] 158.8MB/180.4MB 2d9ac7a96b08 Downloading [===> ] 3.01kB/47.96kB 2d9ac7a96b08 Downloading [==================================================>] 47.96kB/47.96kB 2d9ac7a96b08 Verifying Checksum 2d9ac7a96b08 Download complete c9a66980b76c Downloading [======> ] 3.01kB/23.82kB c9a66980b76c Downloading [==================================================>] 23.82kB/23.82kB c9a66980b76c Verifying Checksum c9a66980b76c Download complete 806be17e856d Downloading [==============> ] 25.41MB/89.72MB a59a4ddf8225 Verifying Checksum a59a4ddf8225 Download complete 562cf3de6818 Downloading [> ] 539.6kB/61.52MB 4abcf2066143 Extracting [===============> ] 1.049MB/3.409MB 92ff7cbea015 Extracting [=====> ] 5.571MB/55.22MB bfcc9123594e Downloading [> ] 506.8kB/50.57MB 10ac4908093d Extracting [=====> ] 3.277MB/30.43MB ad1782e4d1ef Extracting [============================================> ] 161MB/180.4MB 4abcf2066143 Extracting [==================================================>] 3.409MB/3.409MB 806be17e856d Downloading [======================> ] 40.01MB/89.72MB 562cf3de6818 Downloading [======> ] 8.109MB/61.52MB 81667b400b57 Pull complete 4abcf2066143 Pull complete ec3b6d0cc414 Extracting [==================================================>] 
1.036kB/1.036kB ec3b6d0cc414 Extracting [==================================================>] 1.036kB/1.036kB 5c277da153ce Extracting [==================================================>] 141B/141B 5c277da153ce Extracting [==================================================>] 141B/141B bfcc9123594e Downloading [========> ] 8.633MB/50.57MB 10ac4908093d Extracting [==========> ] 6.226MB/30.43MB 92ff7cbea015 Extracting [========> ] 8.913MB/55.22MB ad1782e4d1ef Extracting [=============================================> ] 163.2MB/180.4MB 806be17e856d Downloading [=============================> ] 53.53MB/89.72MB 562cf3de6818 Downloading [===============> ] 18.92MB/61.52MB bfcc9123594e Downloading [==================> ] 18.79MB/50.57MB 10ac4908093d Extracting [=============> ] 8.52MB/30.43MB 92ff7cbea015 Extracting [=========> ] 10.58MB/55.22MB 806be17e856d Downloading [====================================> ] 65.96MB/89.72MB ec3b6d0cc414 Pull complete 5c277da153ce Pull complete 562cf3de6818 Downloading [========================> ] 29.74MB/61.52MB a8d3998ab21c Extracting [==================================================>] 13.9kB/13.9kB a8d3998ab21c Extracting [==================================================>] 13.9kB/13.9kB ad1782e4d1ef Extracting [=============================================> ] 164.9MB/180.4MB 85ed0bf0f127 Extracting [> ] 32.77kB/3.184MB bfcc9123594e Downloading [=============================> ] 29.46MB/50.57MB 10ac4908093d Extracting [=================> ] 10.49MB/30.43MB 92ff7cbea015 Extracting [============> ] 13.37MB/55.22MB 806be17e856d Downloading [==========================================> ] 76.77MB/89.72MB 562cf3de6818 Downloading [=================================> ] 41.63MB/61.52MB ad1782e4d1ef Extracting [==============================================> ] 169.3MB/180.4MB bfcc9123594e Downloading [=======================================> ] 39.62MB/50.57MB 85ed0bf0f127 Extracting [=====> ] 327.7kB/3.184MB 92ff7cbea015 Extracting 
[=============> ] 14.48MB/55.22MB 806be17e856d Downloading [=================================================> ] 89.21MB/89.72MB 10ac4908093d Extracting [======================> ] 13.43MB/30.43MB a8d3998ab21c Pull complete 562cf3de6818 Downloading [============================================> ] 54.61MB/61.52MB 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB bfcc9123594e Verifying Checksum bfcc9123594e Download complete 85ed0bf0f127 Extracting [=================> ] 1.114MB/3.184MB f73d5405641d Downloading [============> ] 3.01kB/11.92kB f73d5405641d Download complete ad1782e4d1ef Extracting [===============================================> ] 171.6MB/180.4MB 806be17e856d Verifying Checksum 806be17e856d Download complete 562cf3de6818 Verifying Checksum 562cf3de6818 Download complete 0c9bbf800250 Downloading [==================================================>] 1.225kB/1.225kB 0c9bbf800250 Verifying Checksum 0c9bbf800250 Download complete 92ff7cbea015 Extracting [===============> ] 16.71MB/55.22MB 10ac4908093d Extracting [===========================> ] 17.04MB/30.43MB 85ed0bf0f127 Extracting [================================================> ] 3.113MB/3.184MB 4798a7e93601 Downloading [> ] 376.1kB/37.11MB 4798a7e93601 Downloading [> ] 376.1kB/37.11MB 016e383f3f47 Downloading [================================> ] 720B/1.102kB 016e383f3f47 Downloading [==================================================>] 1.102kB/1.102kB 016e383f3f47 Downloading [================================> ] 720B/1.102kB 016e383f3f47 Downloading [==================================================>] 1.102kB/1.102kB 016e383f3f47 Verifying Checksum 016e383f3f47 Download complete 016e383f3f47 Verifying Checksum 016e383f3f47 Download complete ad1782e4d1ef Extracting [================================================> ] 173.2MB/180.4MB a453f30e82bf Downloading [> ] 
539.9kB/257.5MB a453f30e82bf Downloading [> ] 539.9kB/257.5MB 85ed0bf0f127 Extracting [==================================================>] 3.184MB/3.184MB 10ac4908093d Extracting [=================================> ] 20.64MB/30.43MB 92ff7cbea015 Extracting [==================> ] 20.61MB/55.22MB 4798a7e93601 Downloading [============> ] 9.448MB/37.11MB 4798a7e93601 Downloading [============> ] 9.448MB/37.11MB f7d27dafad0a Downloading [> ] 86.68kB/8.351MB f7d27dafad0a Downloading [> ] 86.68kB/8.351MB 89d6e2ec6372 Pull complete 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB a453f30e82bf Downloading [=> ] 9.693MB/257.5MB a453f30e82bf Downloading [=> ] 9.693MB/257.5MB 10ac4908093d Extracting [=====================================> ] 22.61MB/30.43MB ad1782e4d1ef Extracting [================================================> ] 174.9MB/180.4MB 4798a7e93601 Downloading [============================> ] 21.13MB/37.11MB 4798a7e93601 Downloading [============================> ] 21.13MB/37.11MB f7d27dafad0a Verifying Checksum f7d27dafad0a Download complete f7d27dafad0a Verifying Checksum f7d27dafad0a Download complete 92ff7cbea015 Extracting [=====================> ] 23.4MB/55.22MB a453f30e82bf Downloading [====> ] 22.04MB/257.5MB a453f30e82bf Downloading [====> ] 22.04MB/257.5MB 85ed0bf0f127 Pull complete a59a4ddf8225 Extracting [> ] 65.54kB/4.333MB 10ac4908093d Extracting [=========================================> ] 25.23MB/30.43MB ad1782e4d1ef Extracting [================================================> ] 176.6MB/180.4MB 80096f8bb25e Pull complete cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB 4798a7e93601 Downloading [===============================================> ] 35.11MB/37.11MB 4798a7e93601 
Downloading [===============================================> ] 35.11MB/37.11MB 56ccc8be1ca0 Downloading [=> ] 687B/21.29kB 56ccc8be1ca0 Downloading [=> ] 687B/21.29kB 56ccc8be1ca0 Verifying Checksum 56ccc8be1ca0 Download complete 56ccc8be1ca0 Verifying Checksum 56ccc8be1ca0 Download complete 4798a7e93601 Verifying Checksum 4798a7e93601 Verifying Checksum 4798a7e93601 Download complete 4798a7e93601 Download complete a453f30e82bf Downloading [======> ] 35.46MB/257.5MB a453f30e82bf Downloading [======> ] 35.46MB/257.5MB 92ff7cbea015 Extracting [========================> ] 26.74MB/55.22MB a59a4ddf8225 Extracting [===> ] 262.1kB/4.333MB 10ac4908093d Extracting [===========================================> ] 26.54MB/30.43MB ad1782e4d1ef Extracting [=================================================> ] 178.3MB/180.4MB 1c6e35a73ed7 Downloading [================================> ] 720B/1.105kB 1c6e35a73ed7 Downloading [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Downloading [==================================================>] 1.105kB/1.105kB 1c6e35a73ed7 Verifying Checksum 1c6e35a73ed7 Verifying Checksum 1c6e35a73ed7 Download complete 1c6e35a73ed7 Download complete f77f01ac624c Downloading [> ] 442.4kB/43.2MB f77f01ac624c Downloading [> ] 442.4kB/43.2MB a453f30e82bf Downloading [========> ] 43.47MB/257.5MB a453f30e82bf Downloading [========> ] 43.47MB/257.5MB 4798a7e93601 Extracting [> ] 393.2kB/37.11MB 4798a7e93601 Extracting [> ] 393.2kB/37.11MB 92ff7cbea015 Extracting [=========================> ] 28.41MB/55.22MB a59a4ddf8225 Extracting [================================> ] 2.818MB/4.333MB 10ac4908093d Extracting [==============================================> ] 28.51MB/30.43MB aa5e151b62ff Downloading [=========================================> ] 709B/853B aa5e151b62ff Downloading [=========================================> ] 709B/853B aa5e151b62ff Downloading [==================================================>] 853B/853B 
aa5e151b62ff Verifying Checksum aa5e151b62ff Download complete aa5e151b62ff Downloading [==================================================>] 853B/853B aa5e151b62ff Verifying Checksum aa5e151b62ff Download complete f77f01ac624c Downloading [===========> ] 9.726MB/43.2MB f77f01ac624c Downloading [===========> ] 9.726MB/43.2MB cbd359ebc87d Pull complete 4798a7e93601 Extracting [===> ] 2.359MB/37.11MB 4798a7e93601 Extracting [===> ] 2.359MB/37.11MB ad1782e4d1ef Extracting [=================================================> ] 179.4MB/180.4MB a59a4ddf8225 Extracting [==================================================>] 4.333MB/4.333MB a453f30e82bf Downloading [==========> ] 55.32MB/257.5MB a453f30e82bf Downloading [==========> ] 55.32MB/257.5MB policy-db-migrator Pulled 92ff7cbea015 Extracting [==============================> ] 33.42MB/55.22MB f77f01ac624c Downloading [====================> ] 17.69MB/43.2MB 262d375318c3 Downloading [==================================================>] 98B/98B f77f01ac624c Downloading [====================> ] 17.69MB/43.2MB 262d375318c3 Downloading [==================================================>] 98B/98B 262d375318c3 Verifying Checksum 262d375318c3 Download complete 262d375318c3 Download complete a59a4ddf8225 Pull complete 10ac4908093d Extracting [===============================================> ] 29.16MB/30.43MB 2d9ac7a96b08 Extracting [==================================> ] 32.77kB/47.96kB 4798a7e93601 Extracting [=====> ] 4.325MB/37.11MB 4798a7e93601 Extracting [=====> ] 4.325MB/37.11MB 2d9ac7a96b08 Extracting [==================================================>] 47.96kB/47.96kB a453f30e82bf Downloading [============> ] 63.92MB/257.5MB a453f30e82bf Downloading [============> ] 63.92MB/257.5MB 92ff7cbea015 Extracting [====================================> ] 40.11MB/55.22MB ad1782e4d1ef Extracting [==================================================>] 180.4MB/180.4MB 28a7d18ebda4 Downloading 
[==================================================>] 173B/173B 28a7d18ebda4 Downloading [==================================================>] 173B/173B 28a7d18ebda4 Verifying Checksum 28a7d18ebda4 Verifying Checksum 28a7d18ebda4 Download complete 28a7d18ebda4 Download complete f77f01ac624c Downloading [====================================> ] 31.36MB/43.2MB f77f01ac624c Downloading [====================================> ] 31.36MB/43.2MB 10ac4908093d Extracting [=================================================> ] 30.15MB/30.43MB a453f30e82bf Downloading [===============> ] 79.48MB/257.5MB a453f30e82bf Downloading [===============> ] 79.48MB/257.5MB 92ff7cbea015 Extracting [============================================> ] 49.58MB/55.22MB 4798a7e93601 Extracting [=========> ] 6.685MB/37.11MB 4798a7e93601 Extracting [=========> ] 6.685MB/37.11MB bdc615dfc787 Downloading [> ] 2.738kB/230.6kB bdc615dfc787 Downloading [> ] 2.738kB/230.6kB bdc615dfc787 Verifying Checksum bdc615dfc787 Verifying Checksum bdc615dfc787 Download complete bdc615dfc787 Download complete f77f01ac624c Downloading [=================================================> ] 42.37MB/43.2MB f77f01ac624c Downloading [=================================================> ] 42.37MB/43.2MB f77f01ac624c Verifying Checksum f77f01ac624c Download complete f77f01ac624c Verifying Checksum f77f01ac624c Download complete 10ac4908093d Extracting [==================================================>] 30.43MB/30.43MB a453f30e82bf Downloading [==================> ] 93.45MB/257.5MB a453f30e82bf Downloading [==================> ] 93.45MB/257.5MB 2d9ac7a96b08 Pull complete c9a66980b76c Extracting [==================================================>] 23.82kB/23.82kB c9a66980b76c Extracting [==================================================>] 23.82kB/23.82kB ad1782e4d1ef Pull complete 4798a7e93601 Extracting [=============> ] 10.22MB/37.11MB 4798a7e93601 Extracting [=============> ] 10.22MB/37.11MB bc8105c6553b Extracting 
[===================> ] 32.77kB/84.13kB ab973a5038b6 Downloading [> ] 535.8kB/121.6MB bc8105c6553b Extracting [==================================================>] 84.13kB/84.13kB bc8105c6553b Extracting [==================================================>] 84.13kB/84.13kB 5aee3e0528f7 Downloading [==========> ] 720B/3.445kB 5aee3e0528f7 Downloading [==================================================>] 3.445kB/3.445kB 5aee3e0528f7 Verifying Checksum 5aee3e0528f7 Download complete 92ff7cbea015 Extracting [=================================================> ] 55.15MB/55.22MB 92ff7cbea015 Extracting [==================================================>] 55.22MB/55.22MB a453f30e82bf Downloading [====================> ] 107.9MB/257.5MB a453f30e82bf Downloading [====================> ] 107.9MB/257.5MB 10ac4908093d Pull complete 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 92ff7cbea015 Pull complete 4798a7e93601 Extracting [==================> ] 13.37MB/37.11MB 4798a7e93601 Extracting [==================> ] 13.37MB/37.11MB c9a66980b76c Pull complete ab973a5038b6 Downloading [===> ] 8.601MB/121.6MB 33966fd36306 Downloading [> ] 538.9kB/121.6MB a453f30e82bf Downloading [======================> ] 115.9MB/257.5MB a453f30e82bf Downloading [======================> ] 115.9MB/257.5MB bc8105c6553b Pull complete 929241f867bb Extracting [==================================================>] 92B/92B 929241f867bb Extracting [==================================================>] 92B/92B 4798a7e93601 Extracting [======================> ] 16.52MB/37.11MB 4798a7e93601 Extracting [======================> ] 16.52MB/37.11MB 44779101e748 Pull complete a721db3e3f3d Extracting [> ] 65.54kB/5.526MB 33966fd36306 Downloading [=====> ] 12.87MB/121.6MB ab973a5038b6 Downloading [=========> ] 24.21MB/121.6MB a453f30e82bf Downloading [=========================> ] 
131MB/257.5MB a453f30e82bf Downloading [=========================> ] 131MB/257.5MB 3e818186829e Extracting [> ] 524.3kB/50.12MB 562cf3de6818 Extracting [> ] 557.1kB/61.52MB 929241f867bb Pull complete 37728a7352e6 Extracting [==================================================>] 92B/92B 37728a7352e6 Extracting [==================================================>] 92B/92B 33966fd36306 Downloading [===========> ] 27.9MB/121.6MB ab973a5038b6 Downloading [===============> ] 37.61MB/121.6MB 4798a7e93601 Extracting [============================> ] 20.84MB/37.11MB 4798a7e93601 Extracting [============================> ] 20.84MB/37.11MB a721db3e3f3d Extracting [==> ] 262.1kB/5.526MB a453f30e82bf Downloading [===========================> ] 142.8MB/257.5MB a453f30e82bf Downloading [===========================> ] 142.8MB/257.5MB 3e818186829e Extracting [===> ] 3.146MB/50.12MB 562cf3de6818 Extracting [==> ] 3.342MB/61.52MB 33966fd36306 Downloading [================> ] 39.66MB/121.6MB ab973a5038b6 Downloading [====================> ] 50.01MB/121.6MB a721db3e3f3d Extracting [=======================> ] 2.621MB/5.526MB 4798a7e93601 Extracting [================================> ] 23.99MB/37.11MB 4798a7e93601 Extracting [================================> ] 23.99MB/37.11MB a453f30e82bf Downloading [=============================> ] 154MB/257.5MB a453f30e82bf Downloading [=============================> ] 154MB/257.5MB 562cf3de6818 Extracting [====> ] 5.571MB/61.52MB 37728a7352e6 Pull complete 3f40c7aa46a6 Extracting [==================================================>] 302B/302B 3f40c7aa46a6 Extracting [==================================================>] 302B/302B 3e818186829e Extracting [======> ] 6.291MB/50.12MB 33966fd36306 Downloading [==================> ] 43.96MB/121.6MB ab973a5038b6 Downloading [========================> ] 59.71MB/121.6MB a721db3e3f3d Extracting [========================================> ] 4.456MB/5.526MB 4798a7e93601 Extracting 
[===================================> ] 26.35MB/37.11MB 4798a7e93601 Extracting [===================================> ] 26.35MB/37.11MB a453f30e82bf Downloading [===============================> ] 164.2MB/257.5MB a453f30e82bf Downloading [===============================> ] 164.2MB/257.5MB 562cf3de6818 Extracting [======> ] 8.356MB/61.52MB 3e818186829e Extracting [========> ] 8.389MB/50.12MB ab973a5038b6 Downloading [============================> ] 68.89MB/121.6MB 33966fd36306 Downloading [======================> ] 55.24MB/121.6MB a721db3e3f3d Extracting [==========================================> ] 4.719MB/5.526MB 4798a7e93601 Extracting [=======================================> ] 29.1MB/37.11MB 4798a7e93601 Extracting [=======================================> ] 29.1MB/37.11MB a453f30e82bf Downloading [==================================> ] 177MB/257.5MB a453f30e82bf Downloading [==================================> ] 177MB/257.5MB 3f40c7aa46a6 Pull complete 3e818186829e Extracting [==========> ] 10.49MB/50.12MB 562cf3de6818 Extracting [=======> ] 9.47MB/61.52MB ab973a5038b6 Downloading [================================> ] 78.58MB/121.6MB 33966fd36306 Downloading [=========================> ] 62.76MB/121.6MB a721db3e3f3d Extracting [==============================================> ] 5.112MB/5.526MB 4798a7e93601 Extracting [=========================================> ] 30.67MB/37.11MB 4798a7e93601 Extracting [=========================================> ] 30.67MB/37.11MB a721db3e3f3d Extracting [==================================================>] 5.526MB/5.526MB a453f30e82bf Downloading [==================================> ] 178.1MB/257.5MB a453f30e82bf Downloading [==================================> ] 178.1MB/257.5MB 3e818186829e Extracting [============> ] 12.58MB/50.12MB ab973a5038b6 Downloading [=====================================> ] 90.93MB/121.6MB 33966fd36306 Downloading [=============================> ] 70.81MB/121.6MB a721db3e3f3d Pull complete 1850a929b84a 
[docker image layer download/extract progress elided; all layers verified and pulled]
apex-pdp Pulled
mariadb Pulled
prometheus Pulled
grafana Pulled
zookeeper Pulled
kafka Pulled
Network compose_default Creating
Network compose_default Created
Container mariadb Creating
Container simulator Creating
Container prometheus Creating
Container zookeeper Creating
Container simulator Created
Container prometheus Created
Container grafana Creating
Container mariadb Created
Container policy-db-migrator Creating
Container zookeeper Created
Container kafka Creating
Container grafana Created
Container policy-db-migrator Created
Container policy-api Creating
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-apex-pdp Creating
Container policy-apex-pdp Created
Container mariadb Starting
Container prometheus Starting
Container simulator Starting
Container zookeeper Starting
Container mariadb Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container simulator Started
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting for REST to come up on localhost port 30003...
NAMES             STATUS
policy-apex-pdp   Up 11 seconds
policy-pap        Up 12 seconds
policy-api        Up 15 seconds
kafka             Up 12 seconds
grafana           Up 10 seconds
simulator         Up 14 seconds
prometheus        Up 10 seconds
zookeeper         Up 13 seconds
mariadb           Up 17 seconds
[five further status polls at ~5-second intervals elided; all containers remained Up through 42 seconds]
Build docker image for robot framework
Error: No such image: policy-csit-robot
Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
Build robot framework docker image Sending build context to Docker daemon 16.14MB Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye 3.10-slim-bullseye: Pulling from library/python 82aabceedc2f: Pulling fs layer 92566c25ddaa: Pulling fs layer 99e8ea822ad1: Pulling fs layer c74e2b8db7e7: Pulling fs layer b05e7fcaa1f6: Pulling fs layer c74e2b8db7e7: Waiting b05e7fcaa1f6: Waiting 92566c25ddaa: Verifying Checksum 92566c25ddaa: Download complete c74e2b8db7e7: Download complete b05e7fcaa1f6: Verifying Checksum b05e7fcaa1f6: Download complete 99e8ea822ad1: Verifying Checksum 99e8ea822ad1: Download complete 82aabceedc2f: Verifying Checksum 82aabceedc2f: Download complete 82aabceedc2f: Pull complete 92566c25ddaa: Pull complete 99e8ea822ad1: Pull complete c74e2b8db7e7: Pull complete b05e7fcaa1f6: Pull complete Digest: sha256:9a5a570e42ffacf633831b188ff88ec3fab9ac2302fee4cf4c18973689f2f3a9 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye ---> 0818d4906b16 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} ---> Running in f0b87a50a8f8 Removing intermediate container f0b87a50a8f8 ---> a60a704cbb2c Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} ---> Running in 1e650af8f746 Removing intermediate container 1e650af8f746 ---> 1914f6358679 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST ---> Running in a8933d8722ea Removing intermediate container a8933d8722ea ---> 5bf14cbc698c Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze ---> Running in 97414c4287a3 bcrypt==4.2.0 certifi==2024.7.4 cffi==1.17.0 charset-normalizer==3.3.2 confluent-kafka==2.5.0 cryptography==43.0.0 decorator==5.1.1 deepdiff==7.0.1 dnspython==2.6.1 future==1.0.0 idna==3.7 
Jinja2==3.1.4
jsonpath-rw==1.4.0
kafka-python==2.0.2
MarkupSafe==2.1.5
more-itertools==5.0.0
ordered-set==4.1.0
paramiko==3.4.1
pbr==6.0.0
ply==3.11
protobuf==5.28.0rc2
pycparser==2.22
PyNaCl==1.5.0
PyYAML==6.0.2
requests==2.32.3
robotframework==7.0.1
robotframework-onap==0.6.0.dev105
robotframework-requests==1.0a11
robotlibcore-temp==1.0.2
six==1.16.0
urllib3==2.2.2
Removing intermediate container 97414c4287a3
 ---> 5d227d0433ce
Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
 ---> Running in e6bf872c1bf7
Removing intermediate container e6bf872c1bf7
 ---> 011969ccac72
Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
 ---> 63345d594678
Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
 ---> Running in 27153f2e0951
Removing intermediate container 27153f2e0951
 ---> 3b1cd1d966af
Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
 ---> Running in 793cbbc59c1a
Removing intermediate container 793cbbc59c1a
 ---> e6097cf2f978
Successfully built e6097cf2f978
Successfully tagged policy-csit-robot:latest

top - 17:03:03 up 3 min, 0 users, load average: 2.79, 1.53, 0.61
Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.4 us, 3.9 sy, 0.0 ni, 75.5 id, 5.1 wa, 0.0 hi, 0.1 si, 0.1 st

          total        used        free      shared  buff/cache   available
Mem:        31G        2.8G        22G        1.3M        6.1G        28G
Swap:      1.0G          0B       1.0G

NAMES             STATUS
policy-apex-pdp   Up About a minute
policy-pap        Up About a minute
policy-api        Up About a minute
kafka             Up About a minute
grafana           Up About a minute
simulator         Up About a minute
prometheus        Up About a minute
zookeeper         Up About a minute
mariadb           Up About a minute

CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
066046403c5f   policy-apex-pdp   1.49%   175.4MiB / 31.41GiB   0.55%   26.6kB / 40kB     0B / 0B         50
7239db023d82   policy-pap        2.39%   616.1MiB / 31.41GiB   1.92%   111kB / 132kB     0B / 149MB      64
f7ab422ab243   policy-api        0.14%   500MiB / 31.41GiB     1.55%   989kB / 673kB     0B / 0B         54
473921d9a5e5   kafka             4.00%   382.4MiB / 31.41GiB   1.19%   126kB / 126kB     0B / 532kB      87
533d8a8097fe   grafana           3.11%   53.07MiB / 31.41GiB   0.16%   23.6kB / 4.56kB   0B / 26.2MB     19
310c2d4d592d   simulator         0.09%   120.8MiB / 31.41GiB   0.38%   1.34kB / 0B       225kB / 0B      77
cf52c9ff8e59   prometheus        0.00%   19.54MiB / 31.41GiB   0.06%   67kB / 2.83kB     0B / 0B         13
b4d4e9c1e705   zookeeper         0.52%   85.55MiB / 31.41GiB   0.27%   55.6kB / 48.8kB   0B / 406kB      62
0640401a4657   mariadb           0.12%   102.9MiB / 31.41GiB   0.32%   969kB / 1.22MB    11MB / 71.7MB   31

 Container policy-csit  Creating
 Container policy-csit  Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v CLAMP_K8S_TEST:
policy-csit | Starting Robot test suites ...
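The `ROBOT_VARIABLES` listing above is a series of Robot Framework `-v NAME:value` options in which the value itself may contain `:` (e.g. `POLICY_PAP_IP:policy-pap:6969`), so only the first colon separates name from value. A small sketch of turning such a string into a dict (the helper name `parse_robot_variables` is ours, not part of run-test.sh):

```python
import re

def parse_robot_variables(args):
    """Split Robot Framework `-v NAME:value` options into a dict.
    Only the first ':' separates name from value, since values such as
    'policy-pap:6969' contain colons; a trailing bare 'NAME:' (like
    CLAMP_K8S_TEST in this log) maps to an empty string."""
    variables = {}
    for m in re.finditer(r"-v (\S+)", args):
        name, _, value = m.group(1).partition(":")
        variables[name] = value
    return variables
```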
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0

NAMES             STATUS
policy-apex-pdp   Up 2 minutes
policy-pap        Up 2 minutes
policy-api        Up 2 minutes
kafka             Up 2 minutes
grafana           Up 2 minutes
simulator         Up 2 minutes
prometheus        Up 2 minutes
zookeeper         Up 2 minutes
mariadb           Up 2 minutes

Shut down started!
Collecting logs from docker compose containers...
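Robot Framework prints a `N tests, N passed, N failed` summary per suite; in this log the top-level `Pap-Test & Pap-Slas` line (30/30/0) repeats the counts of its two child suites (22 + 8), so a tally should only sum leaf-suite lines. A hedged sketch (the helper `tally_suites` is ours, for illustration):

```python
import re

def tally_suites(log_text):
    """Sum Robot Framework per-suite 'N tests, N passed, N failed'
    summary lines; returns (tests, passed, failed). Feed it only the
    leaf-suite lines, since parent suites repeat their children's counts."""
    totals = [0, 0, 0]
    for m in re.finditer(r"(\d+) tests, (\d+) passed, (\d+) failed", log_text):
        for i in range(3):
            totals[i] += int(m.group(i + 1))
    return tuple(totals)
```

For the two leaf suites above this returns (30, 30, 0), matching the top-level summary and the container's exit code 0.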
======== Logs from grafana ========
grafana | logger=settings t=2024-08-13T17:02:00.087333306Z level=info msg="Starting Grafana" version=11.1.3 commit=da5a557b6e1c3b33a5f2a4af73428ef67e949e4d branch=v11.1.x compiled=2024-08-13T17:02:00Z
grafana | logger=settings t=2024-08-13T17:02:00.087771971Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-08-13T17:02:00.087786911Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-08-13T17:02:00.087793821Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-08-13T17:02:00.087797841Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-08-13T17:02:00.087801482Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-08-13T17:02:00.087806432Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-08-13T17:02:00.087809242Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-08-13T17:02:00.087812472Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-08-13T17:02:00.087818782Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-08-13T17:02:00.087821782Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-08-13T17:02:00.087824782Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-08-13T17:02:00.087828342Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-08-13T17:02:00.087835532Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-08-13T17:02:00.087840892Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-08-13T17:02:00.087845552Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-08-13T17:02:00.087849332Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-08-13T17:02:00.087854162Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-08-13T17:02:00.087858582Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2024-08-13T17:02:00.088272897Z level=info msg=FeatureToggles transformationsRedesign=true prometheusDataplane=true alertingInsights=true lokiQueryHints=true logsExploreTableVisualisation=true alertingSimplifiedRouting=true lokiStructuredMetadata=true managedPluginsInstall=true awsAsyncQueryCaching=true alertingNoDataErrorExecution=true prometheusConfigOverhaulAuth=true ssoSettingsApi=true panelMonitoring=true lokiQuerySplitting=true dataplaneFrontendFallback=true awsDatasourcesNewFormStyling=true kubernetesPlaylists=true recordedQueriesMulti=true annotationPermissionUpdate=true recoveryThreshold=true prometheusMetricEncyclopedia=true correlations=true publicDashboards=true logsContextDatasourceUi=true exploreContentOutline=true prometheusAzureOverrideAudience=true cloudWatchCrossAccountQuerying=true topnav=true exploreMetrics=true logRowsPopoverMenu=true betterPageScrolling=true angularDeprecationUI=true dashgpt=true cloudWatchNewLabelParsing=true logsInfiniteScrolling=true lokiMetricDataplane=true influxdbBackendMigration=true nestedFolders=true
grafana | logger=sqlstore t=2024-08-13T17:02:00.088332667Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-08-13T17:02:00.088352539Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-08-13T17:02:00.090020577Z level=info msg="Locking database"
grafana | logger=migrator t=2024-08-13T17:02:00.090036607Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-08-13T17:02:00.090849946Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-08-13T17:02:00.091825818Z level=info msg="Migration successfully executed" id="create migration_log table" duration=975.462µs
grafana | logger=migrator t=2024-08-13T17:02:00.0991103Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-08-13T17:02:00.099819659Z level=info msg="Migration successfully executed" id="create user table" duration=709.009µs
grafana | logger=migrator t=2024-08-13T17:02:00.106443133Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-08-13T17:02:00.107700617Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.257344ms
grafana | logger=migrator t=2024-08-13T17:02:00.111584812Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-08-13T17:02:00.112753215Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.168283ms
grafana | logger=migrator t=2024-08-13T17:02:00.116424247Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.11757812Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.154513ms
grafana | logger=migrator t=2024-08-13T17:02:00.157611253Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.158471084Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=858.951µs
grafana | logger=migrator t=2024-08-13T17:02:00.163605991Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.16613534Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.528659ms
grafana | logger=migrator t=2024-08-13T17:02:00.170277087Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-08-13T17:02:00.171582851Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.306984ms
grafana | logger=migrator t=2024-08-13T17:02:00.17668869Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.1776184Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=929.93µs
grafana | logger=migrator t=2024-08-13T17:02:00.180903968Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.181893699Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=989.471µs
grafana | logger=migrator t=2024-08-13T17:02:00.185407838Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:00.185999055Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=592.047µs
grafana | logger=migrator t=2024-08-13T17:02:00.192077285Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-08-13T17:02:00.193277798Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.200333ms
grafana | logger=migrator t=2024-08-13T17:02:00.196603055Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-08-13T17:02:00.19789466Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.291395ms
grafana | logger=migrator t=2024-08-13T17:02:00.202504702Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.202539072Z level=info msg="Migration successfully executed" id="Update user table charset" duration=42.47µs
grafana | logger=migrator t=2024-08-13T17:02:00.207175625Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-08-13T17:02:00.209559663Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.383338ms
grafana | logger=migrator t=2024-08-13T17:02:00.216365059Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-08-13T17:02:00.216639172Z level=info msg="Migration successfully executed" id="Add missing user data" duration=272.543µs
grafana | logger=migrator t=2024-08-13T17:02:00.220328585Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2024-08-13T17:02:00.222525589Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.194204ms
grafana | logger=migrator t=2024-08-13T17:02:00.228640939Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2024-08-13T17:02:00.229438258Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=785.179µs
grafana | logger=migrator t=2024-08-13T17:02:00.234597136Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2024-08-13T17:02:00.23580424Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.206784ms
grafana | logger=migrator t=2024-08-13T17:02:00.240424842Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2024-08-13T17:02:00.251568399Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.147567ms
grafana | logger=migrator t=2024-08-13T17:02:00.254719475Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2024-08-13T17:02:00.256224381Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.504246ms
grafana | logger=migrator t=2024-08-13T17:02:00.259215165Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2024-08-13T17:02:00.259436228Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=221.163µs
grafana | logger=migrator t=2024-08-13T17:02:00.262287061Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2024-08-13T17:02:00.26311435Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=826.909µs
grafana | logger=migrator t=2024-08-13T17:02:00.270639775Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2024-08-13T17:02:00.271137081Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=496.056µs
grafana | logger=migrator t=2024-08-13T17:02:00.279988331Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2024-08-13T17:02:00.280555777Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=568.466µs
grafana | logger=migrator t=2024-08-13T17:02:00.284115777Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2024-08-13T17:02:00.284584283Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=472.236µs
grafana | logger=migrator t=2024-08-13T17:02:00.290322848Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-08-13T17:02:00.291615643Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.292785ms
grafana | logger=migrator t=2024-08-13T17:02:00.296072364Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-08-13T17:02:00.297207136Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.123972ms
grafana | logger=migrator t=2024-08-13T17:02:00.301071919Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-08-13T17:02:00.301783808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=711.699µs
grafana | logger=migrator t=2024-08-13T17:02:00.306624692Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-08-13T17:02:00.307782946Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.159854ms
grafana | logger=migrator t=2024-08-13T17:02:00.313155507Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-08-13T17:02:00.314425751Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.270114ms
grafana | logger=migrator t=2024-08-13T17:02:00.31874986Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.318780081Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=31.391µs
grafana | logger=migrator t=2024-08-13T17:02:00.321635363Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.322336331Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=700.538µs
grafana | logger=migrator t=2024-08-13T17:02:00.328041156Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.329004016Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=963.42µs
grafana | logger=migrator t=2024-08-13T17:02:00.332213853Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.333406886Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.185293ms
grafana | logger=migrator t=2024-08-13T17:02:00.336569642Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.33728315Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=713.488µs
grafana | logger=migrator t=2024-08-13T17:02:00.341559109Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.346052699Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.49163ms
grafana | logger=migrator t=2024-08-13T17:02:00.349510089Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-08-13T17:02:00.350899944Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.390505ms
grafana | logger=migrator t=2024-08-13T17:02:00.355939372Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.356633609Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=694.047µs
grafana | logger=migrator t=2024-08-13T17:02:00.362262623Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.362983332Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=720.519µs
grafana | logger=migrator t=2024-08-13T17:02:00.365584771Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.36629113Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=706.359µs
grafana | logger=migrator t=2024-08-13T17:02:00.36899488Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.369655337Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=660.097µs
grafana | logger=migrator t=2024-08-13T17:02:00.375376302Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:00.375757676Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=381.684µs
grafana | logger=migrator t=2024-08-13T17:02:00.378568368Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-08-13T17:02:00.379083044Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=514.776µs
grafana | logger=migrator t=2024-08-13T17:02:00.381526642Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-08-13T17:02:00.381911096Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=384.344µs
grafana | logger=migrator t=2024-08-13T17:02:00.384632817Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2024-08-13T17:02:00.385246033Z level=info msg="Migration successfully executed" id="create star table" duration=612.626µs
grafana | logger=migrator t=2024-08-13T17:02:00.389757055Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2024-08-13T17:02:00.390498663Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=741.468µs
grafana | logger=migrator t=2024-08-13T17:02:00.395113516Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2024-08-13T17:02:00.395871474Z level=info msg="Migration successfully executed" id="create org table v1" duration=758.058µs
grafana | logger=migrator t=2024-08-13T17:02:00.404604653Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.405344662Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=746.519µs
grafana | logger=migrator t=2024-08-13T17:02:00.411738415Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2024-08-13T17:02:00.412385022Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=646.648µs
grafana | logger=migrator t=2024-08-13T17:02:00.415740329Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.416493929Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=753.57µs
grafana | logger=migrator t=2024-08-13T17:02:00.419521263Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.420314912Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=792.759µs
grafana | logger=migrator t=2024-08-13T17:02:00.423521198Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.424335207Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=818.369µs
grafana | logger=migrator t=2024-08-13T17:02:00.429372354Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.429401255Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.031µs
grafana | logger=migrator t=2024-08-13T17:02:00.432592911Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.432619961Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=28.03µs
grafana | logger=migrator t=2024-08-13T17:02:00.435083499Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2024-08-13T17:02:00.435287391Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=203.922µs
grafana | logger=migrator t=2024-08-13T17:02:00.441426512Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2024-08-13T17:02:00.44215787Z level=info msg="Migration successfully executed" id="create dashboard table" duration=731.538µs
grafana | logger=migrator t=2024-08-13T17:02:00.447771083Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2024-08-13T17:02:00.448780064Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.003391ms
grafana | logger=migrator t=2024-08-13T17:02:00.451677787Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2024-08-13T17:02:00.452549378Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=871.531µs
grafana | logger=migrator t=2024-08-13T17:02:00.45540082Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2024-08-13T17:02:00.456043937Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=643.067µs
grafana | logger=migrator t=2024-08-13T17:02:00.459276393Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2024-08-13T17:02:00.460101292Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=817.869µs
grafana | logger=migrator t=2024-08-13T17:02:00.465874338Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.466639168Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=764.97µs
grafana | logger=migrator t=2024-08-13T17:02:00.470215298Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2024-08-13T17:02:00.475155304Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.939976ms
grafana | logger=migrator t=2024-08-13T17:02:00.478203538Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2024-08-13T17:02:00.479015577Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=812.419µs
grafana | logger=migrator t=2024-08-13T17:02:00.486619764Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.487948148Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.327214ms
grafana | logger=migrator t=2024-08-13T17:02:00.493140477Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.494221199Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.079732ms
grafana | logger=migrator t=2024-08-13T17:02:00.497088912Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:00.497439236Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=350.465µs
grafana | logger=migrator t=2024-08-13T17:02:00.501356371Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2024-08-13T17:02:00.5022085Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=846.339µs
grafana | logger=migrator t=2024-08-13T17:02:00.505223374Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2024-08-13T17:02:00.505290075Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=67.641µs
grafana | logger=migrator t=2024-08-13T17:02:00.508258078Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.510033729Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.775371ms
grafana | logger=migrator t=2024-08-13T17:02:00.513894222Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2024-08-13T17:02:00.515623212Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.72901ms
grafana | logger=migrator t=2024-08-13T17:02:00.518837479Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2024-08-13T17:02:00.520526318Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.68874ms
grafana | logger=migrator t=2024-08-13T17:02:00.523721274Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2024-08-13T17:02:00.524429512Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=707.998µs
grafana | logger=migrator t=2024-08-13T17:02:00.530343829Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2024-08-13T17:02:00.532119589Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.78284ms
grafana | logger=migrator t=2024-08-13T17:02:00.569041228Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2024-08-13T17:02:00.570508255Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.467577ms
grafana | logger=migrator t=2024-08-13T17:02:00.576180989Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2024-08-13T17:02:00.577425953Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.250744ms
grafana | logger=migrator t=2024-08-13T17:02:00.581052984Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.581094114Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=42.91µs
grafana | logger=migrator t=2024-08-13T17:02:00.585438104Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2024-08-13T17:02:00.585466734Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.73µs
grafana | logger=migrator
t=2024-08-13T17:02:00.588882843Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.591045317Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.159404ms grafana | logger=migrator t=2024-08-13T17:02:00.596512709Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.598586623Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.072804ms grafana | logger=migrator t=2024-08-13T17:02:00.602114323Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.603515538Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.396135ms grafana | logger=migrator t=2024-08-13T17:02:00.610186105Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.611667391Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.481126ms grafana | logger=migrator t=2024-08-13T17:02:00.616188273Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.616377255Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=188.992µs grafana | logger=migrator t=2024-08-13T17:02:00.618867813Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-08-13T17:02:00.61946898Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=601.197µs grafana | logger=migrator t=2024-08-13T17:02:00.622597805Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-08-13T17:02:00.623165881Z level=info msg="Migration 
successfully executed" id="Remove unique index org_id_slug" duration=568.416µs grafana | logger=migrator t=2024-08-13T17:02:00.626784223Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-08-13T17:02:00.626808393Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.8µs grafana | logger=migrator t=2024-08-13T17:02:00.629723236Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-08-13T17:02:00.630322193Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=598.737µs grafana | logger=migrator t=2024-08-13T17:02:00.633590999Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-08-13T17:02:00.634100815Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=509.726µs grafana | logger=migrator t=2024-08-13T17:02:00.636823426Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-08-13T17:02:00.640845782Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.021906ms grafana | logger=migrator t=2024-08-13T17:02:00.644648865Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-08-13T17:02:00.645160681Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=511.976µs grafana | logger=migrator t=2024-08-13T17:02:00.648352257Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-08-13T17:02:00.649056446Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_provisioning_dashboard_id - v2" duration=692.158µs grafana | logger=migrator t=2024-08-13T17:02:00.654085332Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-08-13T17:02:00.65474197Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=655.938µs grafana | logger=migrator t=2024-08-13T17:02:00.659910619Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-08-13T17:02:00.660191642Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=281.073µs grafana | logger=migrator t=2024-08-13T17:02:00.662999943Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-08-13T17:02:00.663433458Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=433.295µs grafana | logger=migrator t=2024-08-13T17:02:00.666490243Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-08-13T17:02:00.667922918Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.432395ms grafana | logger=migrator t=2024-08-13T17:02:00.6724456Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-08-13T17:02:00.673093807Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=647.667µs grafana | logger=migrator t=2024-08-13T17:02:00.676067111Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-08-13T17:02:00.676216662Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=150.311µs grafana | logger=migrator t=2024-08-13T17:02:00.680082637Z level=info msg="Executing 
migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-08-13T17:02:00.680225219Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=142.882µs grafana | logger=migrator t=2024-08-13T17:02:00.68387788Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-08-13T17:02:00.684481857Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=603.587µs grafana | logger=migrator t=2024-08-13T17:02:00.687490721Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.688977318Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.486367ms grafana | logger=migrator t=2024-08-13T17:02:00.694782243Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2024-08-13T17:02:00.696411242Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=1.628689ms grafana | logger=migrator t=2024-08-13T17:02:00.703511622Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2024-08-13T17:02:00.705340973Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.835421ms grafana | logger=migrator t=2024-08-13T17:02:00.708795482Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-08-13T17:02:00.709872504Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.076682ms grafana | logger=migrator t=2024-08-13T17:02:00.713251953Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-08-13T17:02:00.714200293Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=948.62µs grafana | logger=migrator 
t=2024-08-13T17:02:00.718690885Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-08-13T17:02:00.719629455Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=933.7µs grafana | logger=migrator t=2024-08-13T17:02:00.723539229Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-08-13T17:02:00.724342379Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=803.59µs grafana | logger=migrator t=2024-08-13T17:02:00.72710775Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-08-13T17:02:00.7280036Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=896.08µs grafana | logger=migrator t=2024-08-13T17:02:00.732772045Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-08-13T17:02:00.739346879Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.571984ms grafana | logger=migrator t=2024-08-13T17:02:00.745192025Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-08-13T17:02:00.746150265Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=958.19µs grafana | logger=migrator t=2024-08-13T17:02:00.752418437Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-08-13T17:02:00.753772772Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.354105ms grafana | logger=migrator t=2024-08-13T17:02:00.75713727Z level=info msg="Executing migration" id="create index 
UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-08-13T17:02:00.758501206Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.364246ms grafana | logger=migrator t=2024-08-13T17:02:00.762030396Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-08-13T17:02:00.762710363Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=673.947µs grafana | logger=migrator t=2024-08-13T17:02:00.767315686Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-08-13T17:02:00.769955416Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.63856ms grafana | logger=migrator t=2024-08-13T17:02:00.773030161Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-08-13T17:02:00.775445508Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.414817ms grafana | logger=migrator t=2024-08-13T17:02:00.778552823Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-08-13T17:02:00.778586094Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=34.521µs grafana | logger=migrator t=2024-08-13T17:02:00.78356223Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-08-13T17:02:00.783890774Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=328.134µs grafana | logger=migrator t=2024-08-13T17:02:00.789356446Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-08-13T17:02:00.793184589Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.827443ms grafana | logger=migrator 
t=2024-08-13T17:02:00.796422576Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-08-13T17:02:00.796765849Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=349.253µs grafana | logger=migrator t=2024-08-13T17:02:00.799077026Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-08-13T17:02:00.799389089Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=311.763µs grafana | logger=migrator t=2024-08-13T17:02:00.805210035Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-08-13T17:02:00.809270352Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.059716ms grafana | logger=migrator t=2024-08-13T17:02:00.81438815Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-08-13T17:02:00.814731883Z level=info msg="Migration successfully executed" id="Update uid value" duration=343.503µs grafana | logger=migrator t=2024-08-13T17:02:00.817877029Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-08-13T17:02:00.818785259Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=908.37µs grafana | logger=migrator t=2024-08-13T17:02:00.823695295Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-08-13T17:02:00.824604475Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=909.15µs grafana | logger=migrator t=2024-08-13T17:02:00.832784118Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2024-08-13T17:02:00.836747653Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.962895ms 
grafana | logger=migrator t=2024-08-13T17:02:00.840237333Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2024-08-13T17:02:00.842833932Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.596059ms grafana | logger=migrator t=2024-08-13T17:02:00.845952847Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-08-13T17:02:00.846866837Z level=info msg="Migration successfully executed" id="create api_key table" duration=913.7µs grafana | logger=migrator t=2024-08-13T17:02:00.851602831Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-08-13T17:02:00.852504221Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=901.31µs grafana | logger=migrator t=2024-08-13T17:02:00.856540387Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-08-13T17:02:00.857961543Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.420106ms grafana | logger=migrator t=2024-08-13T17:02:00.861393712Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-08-13T17:02:00.862824999Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.431097ms grafana | logger=migrator t=2024-08-13T17:02:00.870498866Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-08-13T17:02:00.871409846Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=911.24µs grafana | logger=migrator t=2024-08-13T17:02:00.879370756Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-08-13T17:02:00.880363737Z level=info msg="Migration successfully executed" id="drop index 
UQE_api_key_key - v1" duration=992.611µs grafana | logger=migrator t=2024-08-13T17:02:00.884602686Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-08-13T17:02:00.885976491Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.374105ms grafana | logger=migrator t=2024-08-13T17:02:00.889593302Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-08-13T17:02:00.89816239Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.569968ms grafana | logger=migrator t=2024-08-13T17:02:00.902467918Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-08-13T17:02:00.903281677Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=813.329µs grafana | logger=migrator t=2024-08-13T17:02:00.906697816Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-08-13T17:02:00.907540806Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=843.04µs grafana | logger=migrator t=2024-08-13T17:02:00.91325521Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-08-13T17:02:00.91414646Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=891.36µs grafana | logger=migrator t=2024-08-13T17:02:00.954797881Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-08-13T17:02:00.956256417Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.458036ms grafana | logger=migrator t=2024-08-13T17:02:00.959742288Z level=info msg="Executing migration" id="copy api_key v1 to 
v2" grafana | logger=migrator t=2024-08-13T17:02:00.960537286Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=785.768µs grafana | logger=migrator t=2024-08-13T17:02:00.963908005Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-08-13T17:02:00.964572122Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=666.858µs grafana | logger=migrator t=2024-08-13T17:02:00.968649599Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-08-13T17:02:00.968673909Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.34µs grafana | logger=migrator t=2024-08-13T17:02:00.974123681Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-08-13T17:02:00.978832194Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.708812ms grafana | logger=migrator t=2024-08-13T17:02:00.983170853Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-08-13T17:02:00.985868884Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.695391ms grafana | logger=migrator t=2024-08-13T17:02:00.989912769Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-08-13T17:02:00.990190672Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=277.883µs grafana | logger=migrator t=2024-08-13T17:02:00.994310249Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-08-13T17:02:00.996971059Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.66003ms grafana | logger=migrator 
t=2024-08-13T17:02:01.000300957Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-08-13T17:02:01.003075748Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.774091ms grafana | logger=migrator t=2024-08-13T17:02:01.007404742Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-08-13T17:02:01.008290161Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=884.999µs grafana | logger=migrator t=2024-08-13T17:02:01.01267739Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-08-13T17:02:01.013429897Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=743.517µs grafana | logger=migrator t=2024-08-13T17:02:01.018864896Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-08-13T17:02:01.019663063Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=791.917µs grafana | logger=migrator t=2024-08-13T17:02:01.024638627Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-08-13T17:02:01.025285233Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=646.526µs grafana | logger=migrator t=2024-08-13T17:02:01.02829532Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-08-13T17:02:01.028928095Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=632.445µs grafana | logger=migrator t=2024-08-13T17:02:01.032156044Z level=info msg="Executing migration" id="create index 
IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-08-13T17:02:01.03274474Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=588.216µs grafana | logger=migrator t=2024-08-13T17:02:01.038697133Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-08-13T17:02:01.038755013Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=101.92µs grafana | logger=migrator t=2024-08-13T17:02:01.042552287Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-08-13T17:02:01.042576117Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.54µs grafana | logger=migrator t=2024-08-13T17:02:01.045831516Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-08-13T17:02:01.047864565Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.032729ms grafana | logger=migrator t=2024-08-13T17:02:01.053132462Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-08-13T17:02:01.055058579Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.925787ms grafana | logger=migrator t=2024-08-13T17:02:01.062070832Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-08-13T17:02:01.062154652Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=84.13µs grafana | logger=migrator t=2024-08-13T17:02:01.065514972Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator 
t=2024-08-13T17:02:01.066165759Z level=info msg="Migration successfully executed" id="create quota table v1" duration=648.917µs grafana | logger=migrator t=2024-08-13T17:02:01.069236316Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-08-13T17:02:01.069834301Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=596.865µs grafana | logger=migrator t=2024-08-13T17:02:01.073743116Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-08-13T17:02:01.073765016Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=22.49µs grafana | logger=migrator t=2024-08-13T17:02:01.076736092Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-08-13T17:02:01.077310368Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=574.396µs grafana | logger=migrator t=2024-08-13T17:02:01.080526217Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-08-13T17:02:01.081143282Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=615.955µs grafana | logger=migrator t=2024-08-13T17:02:01.086242018Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-08-13T17:02:01.088363366Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.115168ms grafana | logger=migrator t=2024-08-13T17:02:01.091652226Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-08-13T17:02:01.091682957Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" 
duration=31.311µs
grafana | logger=migrator t=2024-08-13T17:02:01.094793334Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2024-08-13T17:02:01.095611261Z level=info msg="Migration successfully executed" id="create session table" duration=817.447µs
grafana | logger=migrator t=2024-08-13T17:02:01.102999498Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2024-08-13T17:02:01.10320732Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=218.712µs
grafana | logger=migrator t=2024-08-13T17:02:01.111692035Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2024-08-13T17:02:01.111872787Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=181.642µs
grafana | logger=migrator t=2024-08-13T17:02:01.119941269Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2024-08-13T17:02:01.121008918Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.086979ms
grafana | logger=migrator t=2024-08-13T17:02:01.127243344Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2024-08-13T17:02:01.12777498Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=531.866µs
grafana | logger=migrator t=2024-08-13T17:02:01.131010788Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.131033609Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=29.781µs
grafana | logger=migrator t=2024-08-13T17:02:01.135242836Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.135331926Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=90.4µs
grafana | logger=migrator t=2024-08-13T17:02:01.139259842Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2024-08-13T17:02:01.144298866Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.037474ms
grafana | logger=migrator t=2024-08-13T17:02:01.148552575Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2024-08-13T17:02:01.151648212Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.095197ms
grafana | logger=migrator t=2024-08-13T17:02:01.155489076Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2024-08-13T17:02:01.155603407Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=114.301µs
grafana | logger=migrator t=2024-08-13T17:02:01.158731446Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2024-08-13T17:02:01.158837127Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=105.541µs
grafana | logger=migrator t=2024-08-13T17:02:01.162754052Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2024-08-13T17:02:01.163584049Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=829.757µs
grafana | logger=migrator t=2024-08-13T17:02:01.167907848Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.167951778Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=45.08µs
grafana | logger=migrator t=2024-08-13T17:02:01.17150593Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-08-13T17:02:01.17590823Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.40341ms
grafana | logger=migrator t=2024-08-13T17:02:01.180015656Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-08-13T17:02:01.180173977Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=158.571µs
grafana | logger=migrator t=2024-08-13T17:02:01.1826518Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-08-13T17:02:01.185760797Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.108617ms
grafana | logger=migrator t=2024-08-13T17:02:01.19385556Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2024-08-13T17:02:01.198649183Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.791693ms
grafana | logger=migrator t=2024-08-13T17:02:01.202259835Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2024-08-13T17:02:01.202368006Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=109.031µs
grafana | logger=migrator t=2024-08-13T17:02:01.207621033Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2024-08-13T17:02:01.208517641Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=896.078µs
grafana | logger=migrator t=2024-08-13T17:02:01.211629718Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2024-08-13T17:02:01.213028541Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.396913ms
grafana | logger=migrator t=2024-08-13T17:02:01.216454882Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.218355779Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.899407ms
grafana | logger=migrator t=2024-08-13T17:02:01.223775368Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2024-08-13T17:02:01.224832387Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.05735ms
grafana | logger=migrator t=2024-08-13T17:02:01.230055483Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2024-08-13T17:02:01.231082172Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.025439ms
grafana | logger=migrator t=2024-08-13T17:02:01.235183549Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2024-08-13T17:02:01.23642229Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.237951ms
grafana | logger=migrator t=2024-08-13T17:02:01.241526656Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.242409453Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=882.717µs
grafana | logger=migrator t=2024-08-13T17:02:01.270405134Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2024-08-13T17:02:01.271854547Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.449613ms
grafana | logger=migrator t=2024-08-13T17:02:01.277130954Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:01.278583867Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.458793ms
grafana | logger=migrator t=2024-08-13T17:02:01.281354212Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2024-08-13T17:02:01.290812657Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.457955ms
grafana | logger=migrator t=2024-08-13T17:02:01.295523538Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2024-08-13T17:02:01.296643769Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.116941ms
grafana | logger=migrator t=2024-08-13T17:02:01.300833086Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2024-08-13T17:02:01.302195399Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.361913ms
grafana | logger=migrator t=2024-08-13T17:02:01.307665658Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:01.30793992Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=274.252µs
grafana | logger=migrator t=2024-08-13T17:02:01.311622783Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2024-08-13T17:02:01.31246881Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=846.018µs
grafana | logger=migrator t=2024-08-13T17:02:01.316826169Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.321545172Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=4.710153ms
grafana | logger=migrator t=2024-08-13T17:02:01.327112581Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2024-08-13T17:02:01.331212968Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.098987ms
grafana | logger=migrator t=2024-08-13T17:02:01.374986249Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2024-08-13T17:02:01.380750591Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.763072ms
grafana | logger=migrator t=2024-08-13T17:02:01.384187951Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2024-08-13T17:02:01.388078745Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.890714ms
grafana | logger=migrator t=2024-08-13T17:02:01.392025501Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2024-08-13T17:02:01.399084644Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=7.062553ms
grafana | logger=migrator t=2024-08-13T17:02:01.40304187Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2024-08-13T17:02:01.403699695Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=650.485µs
grafana | logger=migrator t=2024-08-13T17:02:01.406851063Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.406873184Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=23.091µs
grafana | logger=migrator t=2024-08-13T17:02:01.409656998Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.409695598Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=39.99µs
grafana | logger=migrator t=2024-08-13T17:02:01.412929817Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.414164629Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.233072ms
grafana | logger=migrator t=2024-08-13T17:02:01.420554296Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-08-13T17:02:01.421406373Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=852.098µs
grafana | logger=migrator t=2024-08-13T17:02:01.424328099Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-08-13T17:02:01.425061876Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=733.477µs
grafana | logger=migrator t=2024-08-13T17:02:01.427859221Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.428608178Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=748.957µs
grafana | logger=migrator t=2024-08-13T17:02:01.433454211Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-08-13T17:02:01.434391189Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=936.488µs
grafana | logger=migrator t=2024-08-13T17:02:01.441591264Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2024-08-13T17:02:01.445584299Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.992985ms
grafana | logger=migrator t=2024-08-13T17:02:01.449340022Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2024-08-13T17:02:01.453051656Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.711054ms
grafana | logger=migrator t=2024-08-13T17:02:01.456514487Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2024-08-13T17:02:01.456694088Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=179.831µs
grafana | logger=migrator t=2024-08-13T17:02:01.459196061Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2024-08-13T17:02:01.460209189Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.012918ms
grafana | logger=migrator t=2024-08-13T17:02:01.463029895Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2024-08-13T17:02:01.463857563Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=817.187µs
grafana | logger=migrator t=2024-08-13T17:02:01.467688016Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2024-08-13T17:02:01.471824963Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.136487ms
grafana | logger=migrator t=2024-08-13T17:02:01.478873986Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2024-08-13T17:02:01.478939817Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=66.851µs
grafana | logger=migrator t=2024-08-13T17:02:01.481922653Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2024-08-13T17:02:01.482781891Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=859.468µs
grafana | logger=migrator t=2024-08-13T17:02:01.486370663Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2024-08-13T17:02:01.487206601Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=835.868µs
grafana | logger=migrator t=2024-08-13T17:02:01.491242456Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2024-08-13T17:02:01.491326657Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=84.411µs
grafana | logger=migrator t=2024-08-13T17:02:01.494206922Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2024-08-13T17:02:01.495170302Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=962.519µs
grafana | logger=migrator t=2024-08-13T17:02:01.498979275Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2024-08-13T17:02:01.499854383Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=875.088µs
grafana | logger=migrator t=2024-08-13T17:02:01.502550307Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2024-08-13T17:02:01.503433035Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=882.418µs
grafana | logger=migrator t=2024-08-13T17:02:01.506261081Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2024-08-13T17:02:01.507260809Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=992.408µs
grafana | logger=migrator t=2024-08-13T17:02:01.513434314Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2024-08-13T17:02:01.514429624Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.00128ms
grafana | logger=migrator t=2024-08-13T17:02:01.518182157Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2024-08-13T17:02:01.519345297Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.16333ms
grafana | logger=migrator t=2024-08-13T17:02:01.522149212Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2024-08-13T17:02:01.522177202Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.91µs
grafana | logger=migrator t=2024-08-13T17:02:01.525693374Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.52979579Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.101516ms
grafana | logger=migrator t=2024-08-13T17:02:01.532724116Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2024-08-13T17:02:01.533585144Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=860.958µs
grafana | logger=migrator t=2024-08-13T17:02:01.536169927Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.540182543Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.012036ms
grafana | logger=migrator t=2024-08-13T17:02:01.544047287Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2024-08-13T17:02:01.544741833Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=695.156µs
grafana | logger=migrator t=2024-08-13T17:02:01.548140314Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2024-08-13T17:02:01.549448735Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.308091ms
grafana | logger=migrator t=2024-08-13T17:02:01.555256317Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2024-08-13T17:02:01.556102905Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=846.238µs
grafana | logger=migrator t=2024-08-13T17:02:01.562089739Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-08-13T17:02:01.572490591Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.401052ms
grafana | logger=migrator t=2024-08-13T17:02:01.575529988Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-08-13T17:02:01.576065513Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=535.595µs
grafana | logger=migrator t=2024-08-13T17:02:01.580301141Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2024-08-13T17:02:01.582192098Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.895707ms
grafana | logger=migrator t=2024-08-13T17:02:01.59253376Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2024-08-13T17:02:01.592995675Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=452.735µs
grafana | logger=migrator t=2024-08-13T17:02:01.596548166Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2024-08-13T17:02:01.597374483Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=826.377µs
grafana | logger=migrator t=2024-08-13T17:02:01.609104537Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2024-08-13T17:02:01.609328169Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=224.912µs
grafana | logger=migrator t=2024-08-13T17:02:01.613211655Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.617427882Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.216017ms
grafana | logger=migrator t=2024-08-13T17:02:01.620450309Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.624754898Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.303789ms
grafana | logger=migrator t=2024-08-13T17:02:01.630799421Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.632420766Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.620735ms
grafana | logger=migrator t=2024-08-13T17:02:01.635629485Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.636620313Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=990.848µs
grafana | logger=migrator t=2024-08-13T17:02:01.640135924Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2024-08-13T17:02:01.640373737Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=238.883µs
grafana | logger=migrator t=2024-08-13T17:02:01.642583566Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2024-08-13T17:02:01.646843275Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.257899ms
grafana | logger=migrator t=2024-08-13T17:02:01.649417798Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2024-08-13T17:02:01.650421697Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=991.478µs
grafana | logger=migrator t=2024-08-13T17:02:01.655964916Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2024-08-13T17:02:01.656140028Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=175.142µs
grafana | logger=migrator t=2024-08-13T17:02:01.65865935Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2024-08-13T17:02:01.659066383Z level=info msg="Migration successfully executed" id="Move region to single row" duration=407.033µs
grafana | logger=migrator t=2024-08-13T17:02:01.661750017Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.662723217Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=965.77µs
grafana | logger=migrator t=2024-08-13T17:02:01.666145017Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.667089335Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=951.428µs
grafana | logger=migrator t=2024-08-13T17:02:01.670845038Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.671819448Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=966.92µs
grafana | logger=migrator t=2024-08-13T17:02:01.674359571Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.675239248Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=879.677µs
grafana | logger=migrator t=2024-08-13T17:02:01.678090124Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.678979541Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=889.417µs
grafana | logger=migrator t=2024-08-13T17:02:01.681646775Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2024-08-13T17:02:01.682514944Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=867.769µs
grafana | logger=migrator t=2024-08-13T17:02:01.686015954Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2024-08-13T17:02:01.686085745Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=70.451µs
grafana | logger=migrator t=2024-08-13T17:02:01.689404064Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2024-08-13T17:02:01.690243281Z level=info msg="Migration successfully executed" id="create test_data table" duration=839.297µs
grafana | logger=migrator t=2024-08-13T17:02:01.692989326Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.693791644Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=802.178µs
grafana | logger=migrator t=2024-08-13T17:02:01.696629389Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2024-08-13T17:02:01.69790892Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.279261ms
grafana | logger=migrator t=2024-08-13T17:02:01.703663122Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2024-08-13T17:02:01.704524939Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=861.747µs
grafana | logger=migrator t=2024-08-13T17:02:01.709175531Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2024-08-13T17:02:01.709355433Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=179.902µs
grafana | logger=migrator t=2024-08-13T17:02:01.712672472Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2024-08-13T17:02:01.713018675Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=346.203µs
grafana | logger=migrator t=2024-08-13T17:02:01.715234705Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2024-08-13T17:02:01.715517167Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=288.323µs
grafana | logger=migrator t=2024-08-13T17:02:01.719749765Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2024-08-13T17:02:01.720793974Z level=info msg="Migration successfully executed" id="create team table" duration=1.043979ms
grafana | logger=migrator t=2024-08-13T17:02:01.724334387Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2024-08-13T17:02:01.725147874Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=813.557µs
grafana | logger=migrator t=2024-08-13T17:02:01.728689775Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2024-08-13T17:02:01.729411101Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=724.096µs
grafana | logger=migrator t=2024-08-13T17:02:01.733445978Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2024-08-13T17:02:01.738059788Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.613ms
grafana | logger=migrator t=2024-08-13T17:02:01.746725375Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2024-08-13T17:02:01.747008579Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=292.984µs
grafana | logger=migrator t=2024-08-13T17:02:01.750369169Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2024-08-13T17:02:01.751975032Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.606063ms
grafana | logger=migrator t=2024-08-13T17:02:01.789031424Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2024-08-13T17:02:01.790403516Z level=info msg="Migration successfully executed" id="create team member table" duration=1.370992ms
grafana | logger=migrator t=2024-08-13T17:02:01.801119421Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2024-08-13T17:02:01.80314541Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=2.031379ms
grafana | logger=migrator t=2024-08-13T17:02:01.807900861Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2024-08-13T17:02:01.808948441Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.04758ms
grafana | logger=migrator t=2024-08-13T17:02:01.817873101Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2024-08-13T17:02:01.81886926Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=996.199µs
grafana | logger=migrator t=2024-08-13T17:02:01.822952187Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2024-08-13T17:02:01.827819669Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.866402ms
grafana | logger=migrator t=2024-08-13T17:02:01.831347021Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2024-08-13T17:02:01.836321455Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.973234ms
grafana | logger=migrator t=2024-08-13T17:02:01.840027309Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2024-08-13T17:02:01.844467898Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.440589ms
grafana | logger=migrator t=2024-08-13T17:02:01.847523085Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2024-08-13T17:02:01.848407994Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=883.449µs
grafana | logger=migrator t=2024-08-13T17:02:01.852660851Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2024-08-13T17:02:01.854056654Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.396373ms
grafana | logger=migrator t=2024-08-13T17:02:01.859583253Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2024-08-13T17:02:01.86154801Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.956027ms
grafana | logger=migrator t=2024-08-13T17:02:01.867455044Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2024-08-13T17:02:01.868749835Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.288081ms
grafana | logger=migrator t=2024-08-13T17:02:01.872033344Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2024-08-13T17:02:01.87274141Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=707.796µs
grafana | logger=migrator t=2024-08-13T17:02:01.876102041Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2024-08-13T17:02:01.876827297Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=725.786µs
grafana | logger=migrator t=2024-08-13T17:02:01.882144964Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2024-08-13T17:02:01.883271754Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.12863ms
grafana | logger=migrator t=2024-08-13T17:02:01.886422092Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2024-08-13T17:02:01.887612963Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.188961ms
grafana | logger=migrator t=2024-08-13T17:02:01.892774179Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2024-08-13T17:02:01.893603487Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=828.518µs
grafana | logger=migrator t=2024-08-13T17:02:01.901346836Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2024-08-13T17:02:01.901668279Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=308.433µs
grafana | logger=migrator t=2024-08-13T17:02:01.90513046Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2024-08-13T17:02:01.90632142Z level=info msg="Migration successfully executed" id="create tag table" duration=1.190571ms
grafana | logger=migrator t=2024-08-13T17:02:01.911257884Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2024-08-13T17:02:01.912695737Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.437603ms
grafana | logger=migrator t=2024-08-13T17:02:01.915842745Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2024-08-13T17:02:01.916565451Z level=info msg="Migration successfully executed" id="create login attempt table" duration=722.686µs
grafana | logger=migrator t=2024-08-13T17:02:01.920135063Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2024-08-13T17:02:01.921039952Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=905.049µs
grafana | logger=migrator t=2024-08-13T17:02:01.925173658Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2024-08-13T17:02:01.926183348Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.00434ms
grafana | logger=migrator t=2024-08-13T17:02:01.929148434Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-13T17:02:01.944115547Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.966883ms
grafana | logger=migrator t=2024-08-13T17:02:01.947416377Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2024-08-13T17:02:01.948439356Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.023379ms
grafana | logger=migrator t=2024-08-13T17:02:01.95222764Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2024-08-13T17:02:01.952904346Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=676.766µs
grafana | logger=migrator t=2024-08-13T17:02:01.955527269Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:01.955745841Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=218.192µs
grafana | logger=migrator t=2024-08-13T17:02:01.959154141Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2024-08-13T17:02:01.959647546Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=497.015µs
grafana | logger=migrator t=2024-08-13T17:02:01.962000897Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2024-08-13T17:02:01.962564073Z level=info msg="Migration successfully executed" id="create user auth table" duration=560.885µs
grafana | logger=migrator t=2024-08-13T17:02:01.965215235Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2024-08-13T17:02:01.965928152Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=712.307µs
grafana | logger=migrator t=2024-08-13T17:02:01.968563086Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2024-08-13T17:02:01.968614396Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=52.06µs
grafana | logger=migrator t=2024-08-13T17:02:01.973852533Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2024-08-13T17:02:01.977719947Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.866734ms
grafana | logger=migrator t=2024-08-13T17:02:01.980194779Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2024-08-13T17:02:01.983848482Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.653413ms
grafana | logger=migrator t=2024-08-13T17:02:01.986373754Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2024-08-13T17:02:01.990078437Z
level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.703853ms grafana | logger=migrator t=2024-08-13T17:02:01.99259586Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-08-13T17:02:01.997674095Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.077185ms grafana | logger=migrator t=2024-08-13T17:02:02.000880943Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-08-13T17:02:02.001552189Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=671.286µs grafana | logger=migrator t=2024-08-13T17:02:02.004245759Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-08-13T17:02:02.008174087Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.927687ms grafana | logger=migrator t=2024-08-13T17:02:02.011373921Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2024-08-13T17:02:02.012235261Z level=info msg="Migration successfully executed" id="create server_lock table" duration=861.28µs grafana | logger=migrator t=2024-08-13T17:02:02.018602804Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2024-08-13T17:02:02.020231192Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.627558ms grafana | logger=migrator t=2024-08-13T17:02:02.025604642Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2024-08-13T17:02:02.026523462Z level=info msg="Migration successfully executed" id="create user auth token table" duration=920.13µs grafana | logger=migrator t=2024-08-13T17:02:02.029231023Z level=info msg="Executing migration" id="add 
unique index user_auth_token.auth_token" grafana | logger=migrator t=2024-08-13T17:02:02.030851721Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.618658ms grafana | logger=migrator t=2024-08-13T17:02:02.035127169Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2024-08-13T17:02:02.036414134Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.281454ms grafana | logger=migrator t=2024-08-13T17:02:02.0423647Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2024-08-13T17:02:02.043838577Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.478657ms grafana | logger=migrator t=2024-08-13T17:02:02.047240245Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2024-08-13T17:02:02.052820338Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.581102ms grafana | logger=migrator t=2024-08-13T17:02:02.059499592Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2024-08-13T17:02:02.060555774Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.056282ms grafana | logger=migrator t=2024-08-13T17:02:02.064847502Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2024-08-13T17:02:02.066214368Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.366726ms grafana | logger=migrator t=2024-08-13T17:02:02.069602436Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2024-08-13T17:02:02.071247363Z level=info msg="Migration successfully executed" 
id="add unique index cache_data.cache_key" duration=1.644897ms grafana | logger=migrator t=2024-08-13T17:02:02.075538672Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2024-08-13T17:02:02.077482094Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.932502ms grafana | logger=migrator t=2024-08-13T17:02:02.081276956Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-08-13T17:02:02.08251589Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.238444ms grafana | logger=migrator t=2024-08-13T17:02:02.086920399Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-08-13T17:02:02.087039591Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=120.992µs grafana | logger=migrator t=2024-08-13T17:02:02.093164059Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-08-13T17:02:02.09328138Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=120.971µs grafana | logger=migrator t=2024-08-13T17:02:02.099700032Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-08-13T17:02:02.10129352Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.593078ms grafana | logger=migrator t=2024-08-13T17:02:02.104931721Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-08-13T17:02:02.105933693Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.002062ms grafana | logger=migrator t=2024-08-13T17:02:02.109515873Z 
level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-08-13T17:02:02.110730916Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.214763ms grafana | logger=migrator t=2024-08-13T17:02:02.114509719Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-08-13T17:02:02.114773422Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=263.773µs grafana | logger=migrator t=2024-08-13T17:02:02.117924108Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-08-13T17:02:02.118865358Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=940.64µs grafana | logger=migrator t=2024-08-13T17:02:02.121883182Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-08-13T17:02:02.122756433Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=873.411µs grafana | logger=migrator t=2024-08-13T17:02:02.126924239Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-08-13T17:02:02.127879171Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=954.222µs grafana | logger=migrator t=2024-08-13T17:02:02.135572777Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-08-13T17:02:02.137111225Z level=info msg="Migration successfully executed" id="add unique 
index in alert_definition on org_id and uid columns" duration=1.538008ms grafana | logger=migrator t=2024-08-13T17:02:02.14120579Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-08-13T17:02:02.150281954Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.076164ms grafana | logger=migrator t=2024-08-13T17:02:02.155549832Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-08-13T17:02:02.157042249Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.492187ms grafana | logger=migrator t=2024-08-13T17:02:02.161705142Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-08-13T17:02:02.161853324Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=149.182µs grafana | logger=migrator t=2024-08-13T17:02:02.2297308Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-08-13T17:02:02.230739541Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.012401ms grafana | logger=migrator t=2024-08-13T17:02:02.238129185Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-08-13T17:02:02.239145436Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.016051ms grafana | logger=migrator t=2024-08-13T17:02:02.242254561Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2024-08-13T17:02:02.24298258Z level=info msg="Migration successfully executed" 
id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=727.809µs grafana | logger=migrator t=2024-08-13T17:02:02.246886084Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-08-13T17:02:02.246938844Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=53.76µs grafana | logger=migrator t=2024-08-13T17:02:02.25008449Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-08-13T17:02:02.250761817Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=676.987µs grafana | logger=migrator t=2024-08-13T17:02:02.254904475Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2024-08-13T17:02:02.25717075Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=2.269735ms grafana | logger=migrator t=2024-08-13T17:02:02.262401979Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-08-13T17:02:02.263824715Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.426416ms grafana | logger=migrator t=2024-08-13T17:02:02.269136375Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-08-13T17:02:02.270417679Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.280754ms grafana | logger=migrator t=2024-08-13T17:02:02.274048811Z level=info msg="Executing migration" id="add column current_state_end to 
alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.28014934Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.099989ms grafana | logger=migrator t=2024-08-13T17:02:02.284799562Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.285932305Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.133393ms grafana | logger=migrator t=2024-08-13T17:02:02.288713716Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.289817398Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.103802ms grafana | logger=migrator t=2024-08-13T17:02:02.293794344Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.320820028Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.025694ms grafana | logger=migrator t=2024-08-13T17:02:02.325095877Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.348097146Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.002609ms grafana | logger=migrator t=2024-08-13T17:02:02.352412685Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.353246165Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=832.77µs grafana | logger=migrator t=2024-08-13T17:02:02.360665959Z 
level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.362476689Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.800139ms grafana | logger=migrator t=2024-08-13T17:02:02.366909079Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2024-08-13T17:02:02.372721324Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.811545ms grafana | logger=migrator t=2024-08-13T17:02:02.378062585Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-08-13T17:02:02.385410077Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.344942ms grafana | logger=migrator t=2024-08-13T17:02:02.38916211Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-08-13T17:02:02.390458775Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.296284ms grafana | logger=migrator t=2024-08-13T17:02:02.393443288Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-08-13T17:02:02.39444567Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.001232ms grafana | logger=migrator t=2024-08-13T17:02:02.401987264Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-08-13T17:02:02.403583603Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.597669ms grafana | logger=migrator t=2024-08-13T17:02:02.407574908Z level=info msg="Executing migration" 
id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-08-13T17:02:02.409173376Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.602788ms grafana | logger=migrator t=2024-08-13T17:02:02.41487514Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-08-13T17:02:02.414949981Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=74.871µs grafana | logger=migrator t=2024-08-13T17:02:02.418469041Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.424794653Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.324462ms grafana | logger=migrator t=2024-08-13T17:02:02.428926759Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.433147437Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.220728ms grafana | logger=migrator t=2024-08-13T17:02:02.43694742Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.442917777Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.971187ms grafana | logger=migrator t=2024-08-13T17:02:02.455435818Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2024-08-13T17:02:02.457249709Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.818581ms grafana | logger=migrator t=2024-08-13T17:02:02.460562486Z level=info msg="Executing migration" id="add index in alert_rule 
on org_id, namespase_uid and title columns" grafana | logger=migrator t=2024-08-13T17:02:02.461619849Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.056823ms grafana | logger=migrator t=2024-08-13T17:02:02.467262602Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.47419289Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.930828ms grafana | logger=migrator t=2024-08-13T17:02:02.479089086Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.483475634Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.386548ms grafana | logger=migrator t=2024-08-13T17:02:02.488478251Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2024-08-13T17:02:02.489516094Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.036643ms grafana | logger=migrator t=2024-08-13T17:02:02.494234346Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2024-08-13T17:02:02.502732902Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.499516ms grafana | logger=migrator t=2024-08-13T17:02:02.506460045Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2024-08-13T17:02:02.511733794Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.270639ms grafana | logger=migrator t=2024-08-13T17:02:02.51487593Z level=info msg="Executing migration" id="fix is_paused column for alert_rule 
table" grafana | logger=migrator t=2024-08-13T17:02:02.514956031Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=80.611µs grafana | logger=migrator t=2024-08-13T17:02:02.520662265Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-08-13T17:02:02.521850538Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.187663ms grafana | logger=migrator t=2024-08-13T17:02:02.530104231Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-08-13T17:02:02.531960033Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.854582ms grafana | logger=migrator t=2024-08-13T17:02:02.53527869Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-08-13T17:02:02.536963389Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.681689ms grafana | logger=migrator t=2024-08-13T17:02:02.541142116Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-08-13T17:02:02.541208907Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=67.631µs grafana | logger=migrator t=2024-08-13T17:02:02.544079389Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-08-13T17:02:02.55033042Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.250221ms grafana | logger=migrator 
t=2024-08-13T17:02:02.554917341Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-08-13T17:02:02.562892022Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.975531ms grafana | logger=migrator t=2024-08-13T17:02:02.570005861Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-08-13T17:02:02.576804569Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.798028ms grafana | logger=migrator t=2024-08-13T17:02:02.580147766Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-08-13T17:02:02.586601429Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.453373ms grafana | logger=migrator t=2024-08-13T17:02:02.607178402Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-08-13T17:02:02.618599311Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=11.429059ms grafana | logger=migrator t=2024-08-13T17:02:02.62739002Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-08-13T17:02:02.627462341Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=75.171µs grafana | logger=migrator t=2024-08-13T17:02:02.634908705Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-08-13T17:02:02.636462772Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.622247ms grafana | logger=migrator t=2024-08-13T17:02:02.641111515Z level=info msg="Executing migration" 
id="Add column default in alert_configuration" grafana | logger=migrator t=2024-08-13T17:02:02.647994872Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.883287ms grafana | logger=migrator t=2024-08-13T17:02:02.650634202Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-08-13T17:02:02.650737923Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=105.811µs grafana | logger=migrator t=2024-08-13T17:02:02.655919202Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-08-13T17:02:02.665852364Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.929592ms grafana | logger=migrator t=2024-08-13T17:02:02.672959124Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2024-08-13T17:02:02.674588723Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.634829ms grafana | logger=migrator t=2024-08-13T17:02:02.679186724Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-08-13T17:02:02.686000782Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.813038ms grafana | logger=migrator t=2024-08-13T17:02:02.691734897Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-08-13T17:02:02.692721567Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=985.521µs grafana | logger=migrator 
t=2024-08-13T17:02:02.696682882Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2024-08-13T17:02:02.697847055Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.163213ms
grafana | logger=migrator t=2024-08-13T17:02:02.703974444Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2024-08-13T17:02:02.710508329Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.532565ms
grafana | logger=migrator t=2024-08-13T17:02:02.721653514Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2024-08-13T17:02:02.72306034Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.403986ms
grafana | logger=migrator t=2024-08-13T17:02:02.726786982Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2024-08-13T17:02:02.729449832Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.66175ms
grafana | logger=migrator t=2024-08-13T17:02:02.733193335Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2024-08-13T17:02:02.734108395Z level=info msg="Migration successfully executed" id="create alert_image table" duration=913.92µs
grafana | logger=migrator t=2024-08-13T17:02:02.739376174Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2024-08-13T17:02:02.740453286Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.075782ms
grafana | logger=migrator t=2024-08-13T17:02:02.744443432Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2024-08-13T17:02:02.744594743Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=151.341µs
grafana | logger=migrator t=2024-08-13T17:02:02.747152032Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2024-08-13T17:02:02.748184304Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.031782ms
grafana | logger=migrator t=2024-08-13T17:02:02.753081519Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2024-08-13T17:02:02.75668033Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=3.59821ms
grafana | logger=migrator t=2024-08-13T17:02:02.764903732Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-08-13T17:02:02.765837053Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-08-13T17:02:02.770193862Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2024-08-13T17:02:02.770990751Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=796.839µs
grafana | logger=migrator t=2024-08-13T17:02:02.776373642Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2024-08-13T17:02:02.777478664Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.104202ms
grafana | logger=migrator t=2024-08-13T17:02:02.781116546Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2024-08-13T17:02:02.7877551Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.638694ms
grafana | logger=migrator t=2024-08-13T17:02:02.791964218Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2024-08-13T17:02:02.792711826Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=747.138µs
grafana | logger=migrator t=2024-08-13T17:02:02.795900012Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2024-08-13T17:02:02.796711001Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=809.409µs
grafana | logger=migrator t=2024-08-13T17:02:02.803993124Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2024-08-13T17:02:02.804827473Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=833.489µs
grafana | logger=migrator t=2024-08-13T17:02:02.809085262Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2024-08-13T17:02:02.81075366Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.667228ms
grafana | logger=migrator t=2024-08-13T17:02:02.815909618Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2024-08-13T17:02:02.816951821Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.039043ms
grafana | logger=migrator t=2024-08-13T17:02:02.822061397Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2024-08-13T17:02:02.822088978Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.821µs
grafana | logger=migrator t=2024-08-13T17:02:02.825726829Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2024-08-13T17:02:02.82582766Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=102.151µs
grafana | logger=migrator t=2024-08-13T17:02:02.831385933Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2024-08-13T17:02:02.840369384Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.979321ms
grafana | logger=migrator t=2024-08-13T17:02:02.852805424Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2024-08-13T17:02:02.854008409Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=1.201475ms
grafana | logger=migrator t=2024-08-13T17:02:02.858898144Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2024-08-13T17:02:02.861149949Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=2.250145ms
grafana | logger=migrator t=2024-08-13T17:02:02.864432356Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2024-08-13T17:02:02.86477061Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=337.314µs
grafana | logger=migrator t=2024-08-13T17:02:02.868853756Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2024-08-13T17:02:02.869653375Z level=info msg="Migration successfully executed" id="create data_keys table" duration=798.529µs
grafana | logger=migrator t=2024-08-13T17:02:02.874837564Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2024-08-13T17:02:02.875729804Z level=info msg="Migration successfully executed" id="create secrets table" duration=891.21µs
grafana | logger=migrator t=2024-08-13T17:02:02.880547328Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2024-08-13T17:02:02.913541431Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.989493ms
grafana | logger=migrator t=2024-08-13T17:02:02.919229605Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2024-08-13T17:02:02.925331843Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.029598ms
grafana | logger=migrator t=2024-08-13T17:02:02.93295176Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2024-08-13T17:02:02.933306574Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=355.574µs
grafana | logger=migrator t=2024-08-13T17:02:02.938407561Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2024-08-13T17:02:02.974509949Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=36.101188ms
grafana | logger=migrator t=2024-08-13T17:02:03.003474706Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2024-08-13T17:02:03.035513868Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.031192ms
grafana | logger=migrator t=2024-08-13T17:02:03.041343454Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2024-08-13T17:02:03.042026501Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=682.637µs
grafana | logger=migrator t=2024-08-13T17:02:03.048074219Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2024-08-13T17:02:03.050140103Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.064734ms
grafana | logger=migrator t=2024-08-13T17:02:03.054255299Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2024-08-13T17:02:03.054631263Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=375.654µs
grafana | logger=migrator t=2024-08-13T17:02:03.059848072Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2024-08-13T17:02:03.060831834Z level=info msg="Migration successfully executed" id="create permission table" duration=983.782µs
grafana | logger=migrator t=2024-08-13T17:02:03.064682627Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2024-08-13T17:02:03.06676119Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.076693ms
grafana | logger=migrator t=2024-08-13T17:02:03.070735405Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2024-08-13T17:02:03.072569696Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.832941ms
grafana | logger=migrator t=2024-08-13T17:02:03.076594931Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2024-08-13T17:02:03.077609223Z level=info msg="Migration successfully executed" id="create role table" duration=1.011362ms
grafana | logger=migrator t=2024-08-13T17:02:03.082263745Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2024-08-13T17:02:03.091636401Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.376916ms
grafana | logger=migrator t=2024-08-13T17:02:03.095921189Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2024-08-13T17:02:03.104215883Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.293614ms
grafana | logger=migrator t=2024-08-13T17:02:03.10834529Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2024-08-13T17:02:03.109209779Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=865.009µs
grafana | logger=migrator t=2024-08-13T17:02:03.113413946Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2024-08-13T17:02:03.115268597Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.852341ms
grafana | logger=migrator t=2024-08-13T17:02:03.121622759Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2024-08-13T17:02:03.122933414Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.311755ms
grafana | logger=migrator t=2024-08-13T17:02:03.129220535Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2024-08-13T17:02:03.130349077Z level=info msg="Migration successfully executed" id="create team role table" duration=1.127602ms
grafana | logger=migrator t=2024-08-13T17:02:03.136596168Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2024-08-13T17:02:03.138241986Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.646918ms
grafana | logger=migrator t=2024-08-13T17:02:03.141826408Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2024-08-13T17:02:03.143041381Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.214033ms
grafana | logger=migrator t=2024-08-13T17:02:03.14659068Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2024-08-13T17:02:03.147766404Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.176214ms
grafana | logger=migrator t=2024-08-13T17:02:03.152881411Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2024-08-13T17:02:03.154072495Z level=info msg="Migration successfully executed" id="create user role table" duration=1.193144ms
grafana | logger=migrator t=2024-08-13T17:02:03.160594279Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2024-08-13T17:02:03.161843143Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.248614ms
grafana | logger=migrator t=2024-08-13T17:02:03.16605491Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2024-08-13T17:02:03.168131244Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.077274ms
grafana | logger=migrator t=2024-08-13T17:02:03.17575147Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2024-08-13T17:02:03.176789612Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.039142ms
grafana | logger=migrator t=2024-08-13T17:02:03.182244533Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2024-08-13T17:02:03.183183143Z level=info msg="Migration successfully executed" id="create builtin role table" duration=937.35µs
grafana | logger=migrator t=2024-08-13T17:02:03.188785487Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2024-08-13T17:02:03.190313575Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.527717ms
grafana | logger=migrator t=2024-08-13T17:02:03.194842255Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2024-08-13T17:02:03.195975258Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.132953ms
grafana | logger=migrator t=2024-08-13T17:02:03.199326456Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2024-08-13T17:02:03.207136314Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.809278ms
grafana | logger=migrator t=2024-08-13T17:02:03.210442401Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2024-08-13T17:02:03.211308431Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=865.79µs
grafana | logger=migrator t=2024-08-13T17:02:03.218003507Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2024-08-13T17:02:03.219157429Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.152622ms
grafana | logger=migrator t=2024-08-13T17:02:03.224679662Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2024-08-13T17:02:03.225852136Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.173194ms
grafana | logger=migrator t=2024-08-13T17:02:03.22892124Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2024-08-13T17:02:03.23071001Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.78832ms
grafana | logger=migrator t=2024-08-13T17:02:03.235019379Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2024-08-13T17:02:03.236359544Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.340075ms
grafana | logger=migrator t=2024-08-13T17:02:03.240836964Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2024-08-13T17:02:03.242175809Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.338525ms
grafana | logger=migrator t=2024-08-13T17:02:03.245226683Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2024-08-13T17:02:03.255613031Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.386478ms
grafana | logger=migrator t=2024-08-13T17:02:03.260645378Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2024-08-13T17:02:03.268146943Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.487665ms
grafana | logger=migrator t=2024-08-13T17:02:03.271378849Z level=info msg="Executing migration" id="permission attribute migration"
grafana | logger=migrator t=2024-08-13T17:02:03.279210568Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.830589ms
grafana | logger=migrator t=2024-08-13T17:02:03.283080321Z level=info msg="Executing migration" id="permission identifier migration"
grafana | logger=migrator t=2024-08-13T17:02:03.291765919Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.690538ms
grafana | logger=migrator t=2024-08-13T17:02:03.301641281Z level=info msg="Executing migration" id="add permission identifier index"
grafana | logger=migrator t=2024-08-13T17:02:03.302916735Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.274994ms
grafana | logger=migrator t=2024-08-13T17:02:03.307462346Z level=info msg="Executing migration" id="add permission action scope role_id index"
grafana | logger=migrator t=2024-08-13T17:02:03.30868756Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.224534ms
grafana | logger=migrator t=2024-08-13T17:02:03.311903846Z level=info msg="Executing migration" id="remove permission role_id action scope index"
grafana | logger=migrator t=2024-08-13T17:02:03.313863608Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.956082ms
grafana | logger=migrator t=2024-08-13T17:02:03.318177817Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2024-08-13T17:02:03.319756455Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.578788ms
grafana | logger=migrator t=2024-08-13T17:02:03.324549189Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2024-08-13T17:02:03.325739512Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.192713ms
grafana | logger=migrator t=2024-08-13T17:02:03.329656647Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2024-08-13T17:02:03.329837699Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=181.742µs
grafana | logger=migrator t=2024-08-13T17:02:03.333302868Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2024-08-13T17:02:03.33348041Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=178.142µs
grafana | logger=migrator t=2024-08-13T17:02:03.339037442Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2024-08-13T17:02:03.339616499Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=579.817µs
grafana | logger=migrator t=2024-08-13T17:02:03.345719567Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2024-08-13T17:02:03.347166344Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.446957ms
grafana | logger=migrator t=2024-08-13T17:02:03.352006499Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
grafana | logger=migrator t=2024-08-13T17:02:03.353476945Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.477636ms
grafana | logger=migrator t=2024-08-13T17:02:03.390995178Z level=info msg="Executing migration" id="drop managed folder create actions"
grafana | logger=migrator t=2024-08-13T17:02:03.391552455Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=557.597µs
grafana | logger=migrator t=2024-08-13T17:02:03.395799873Z level=info msg="Executing migration" id="alerting notification permissions"
grafana | logger=migrator t=2024-08-13T17:02:03.396562881Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=768.239µs
grafana | logger=migrator t=2024-08-13T17:02:03.402139324Z level=info msg="Executing migration" id="create query_history_star table v1"
grafana | logger=migrator t=2024-08-13T17:02:03.403240067Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.099953ms
grafana | logger=migrator t=2024-08-13T17:02:03.410777052Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
grafana | logger=migrator t=2024-08-13T17:02:03.412356789Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.579857ms
grafana | logger=migrator t=2024-08-13T17:02:03.418518899Z level=info msg="Executing migration" id="add column org_id in query_history_star"
grafana | logger=migrator t=2024-08-13T17:02:03.430046189Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.52925ms
grafana | logger=migrator t=2024-08-13T17:02:03.434310677Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
grafana | logger=migrator t=2024-08-13T17:02:03.434466749Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=156.632µs
grafana | logger=migrator t=2024-08-13T17:02:03.437567894Z level=info msg="Executing migration" id="create correlation table v1"
grafana | logger=migrator t=2024-08-13T17:02:03.438768348Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.199494ms
grafana | logger=migrator t=2024-08-13T17:02:03.443072456Z level=info msg="Executing migration" id="add index correlations.uid"
grafana | logger=migrator t=2024-08-13T17:02:03.445336782Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.262976ms
grafana | logger=migrator t=2024-08-13T17:02:03.449357647Z level=info msg="Executing migration" id="add index correlations.source_uid"
grafana | logger=migrator t=2024-08-13T17:02:03.450881594Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.523187ms
grafana | logger=migrator t=2024-08-13T17:02:03.45409106Z level=info msg="Executing migration" id="add correlation config column"
grafana | logger=migrator t=2024-08-13T17:02:03.464978873Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.886013ms
grafana | logger=migrator t=2024-08-13T17:02:03.468980098Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.472141404Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=3.159096ms
grafana | logger=migrator t=2024-08-13T17:02:03.477921189Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.479128533Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.207234ms
grafana | logger=migrator t=2024-08-13T17:02:03.482300989Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.50548894Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.187721ms
grafana | logger=migrator t=2024-08-13T17:02:03.518838961Z level=info msg="Executing migration" id="create correlation v2"
grafana | logger=migrator t=2024-08-13T17:02:03.520545231Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.70692ms
grafana | logger=migrator t=2024-08-13T17:02:03.524879439Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.5258072Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=927.431µs
grafana | logger=migrator t=2024-08-13T17:02:03.528592252Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.530580293Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.979651ms
grafana | logger=migrator t=2024-08-13T17:02:03.535283377Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.5365264Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.242863ms
grafana | logger=migrator t=2024-08-13T17:02:03.541745859Z level=info msg="Executing migration" id="copy correlation v1 to v2"
grafana | logger=migrator t=2024-08-13T17:02:03.542267865Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=521.986µs
grafana | logger=migrator t=2024-08-13T17:02:03.547187581Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
grafana | logger=migrator t=2024-08-13T17:02:03.549061122Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.872161ms
grafana | logger=migrator t=2024-08-13T17:02:03.553900147Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2024-08-13T17:02:03.56308103Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.180013ms
grafana | logger=migrator t=2024-08-13T17:02:03.566118284Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2024-08-13T17:02:03.567062475Z level=info msg="Migration successfully executed" id="create entity_events table" duration=944.061µs
grafana | logger=migrator t=2024-08-13T17:02:03.570242481Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2024-08-13T17:02:03.571348414Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.105424ms
grafana | logger=migrator t=2024-08-13T17:02:03.576851086Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.577607624Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.58165709Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.582239817Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.586720137Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2024-08-13T17:02:03.587571247Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=850.28µs
grafana | logger=migrator t=2024-08-13T17:02:03.593490044Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2024-08-13T17:02:03.595243673Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.75879ms
grafana | logger=migrator t=2024-08-13T17:02:03.601384033Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.602553915Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.170252ms
grafana | logger=migrator t=2024-08-13T17:02:03.606870475Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2024-08-13T17:02:03.60822841Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.356425ms
grafana | logger=migrator t=2024-08-13T17:02:03.612266515Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.614086186Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.81994ms
grafana | logger=migrator t=2024-08-13T17:02:03.618222983Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.619314725Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.091962ms
grafana | logger=migrator t=2024-08-13T17:02:03.623877877Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2024-08-13T17:02:03.624847507Z level=info msg="Migration successfully executed" id="Drop public config table" duration=968.52µs
grafana | logger=migrator t=2024-08-13T17:02:03.629724862Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2024-08-13T17:02:03.63129323Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.567738ms
grafana | logger=migrator t=2024-08-13T17:02:03.636486538Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.637501729Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.014481ms
grafana | logger=migrator t=2024-08-13T17:02:03.640257841Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.641869489Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.607858ms
grafana | logger=migrator t=2024-08-13T17:02:03.647438072Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.649202482Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.76446ms
grafana | logger=migrator t=2024-08-13T17:02:03.653533641Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2024-08-13T17:02:03.678370471Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.83622ms
grafana | logger=migrator t=2024-08-13T17:02:03.681386275Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2024-08-13T17:02:03.687963829Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.577044ms
grafana | logger=migrator t=2024-08-13T17:02:03.691220877Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2024-08-13T17:02:03.69952168Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.293803ms
grafana | logger=migrator t=2024-08-13T17:02:03.826606394Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2024-08-13T17:02:03.827007548Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=402.024µs
grafana | logger=migrator t=2024-08-13T17:02:03.925105195Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2024-08-13T17:02:03.935957338Z level=info msg="Migration successfully executed" id="add share column" duration=10.860733ms
grafana | logger=migrator t=2024-08-13T17:02:03.967555074Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2024-08-13T17:02:03.973119467Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=5.564013ms
grafana | logger=migrator t=2024-08-13T17:02:04.018140964Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2024-08-13T17:02:04.019883794Z level=info msg="Migration successfully executed" id="create file table" duration=1.74413ms
grafana | logger=migrator t=2024-08-13T17:02:04.031584276Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2024-08-13T17:02:04.033453128Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.868022ms
grafana | logger=migrator t=2024-08-13T17:02:04.03994415Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2024-08-13T17:02:04.041121354Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.175344ms
grafana | logger=migrator t=2024-08-13T17:02:04.045444272Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2024-08-13T17:02:04.046294982Z level=info msg="Migration successfully executed" id="create file_meta table" duration=850.7µs
grafana | logger=migrator t=2024-08-13T17:02:04.049709791Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2024-08-13T17:02:04.052368461Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.65673ms
grafana | logger=migrator t=2024-08-13T17:02:04.056140253Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2024-08-13T17:02:04.056377476Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=236.593µs
grafana | logger=migrator t=2024-08-13T17:02:04.064638139Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2024-08-13T17:02:04.064995343Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=356.724µs
grafana | logger=migrator t=2024-08-13T17:02:04.074577861Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2024-08-13T17:02:04.075581983Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.009912ms
grafana | logger=migrator t=2024-08-13T17:02:04.081498069Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2024-08-13T17:02:04.081762572Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=264.243µs
grafana | logger=migrator t=2024-08-13T17:02:04.084727545Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2024-08-13T17:02:04.086208262Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.480867ms
grafana | logger=migrator t=2024-08-13T17:02:04.089216366Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2024-08-13T17:02:04.098591432Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.374686ms
grafana | logger=migrator t=2024-08-13T17:02:04.101403934Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2024-08-13T17:02:04.101662476Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=257.662µs
grafana | logger=migrator t=2024-08-13T17:02:04.108010738Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2024-08-13T17:02:04.110140472Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.130734ms
grafana | logger=migrator t=2024-08-13T17:02:04.115003467Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2024-08-13T17:02:04.115849596Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=846.089µs
grafana | logger=migrator t=2024-08-13T17:02:04.120381098Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2024-08-13T17:02:04.120962654Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=581.106µs
grafana | logger=migrator t=2024-08-13T17:02:04.126168873Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2024-08-13T17:02:04.127133953Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=960.87µs
grafana | logger=migrator t=2024-08-13T17:02:04.130910287Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2024-08-13T17:02:04.139867298Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.95688ms
grafana | logger=migrator t=2024-08-13T17:02:04.144765593Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2024-08-13T17:02:04.152714142Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.947899ms
grafana | logger=migrator t=2024-08-13T17:02:04.15956097Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2024-08-13T17:02:04.161930137Z level=info msg="Migration successfully executed"
id="remove unique index builtin_role_role_name before nullable update" duration=2.373847ms grafana | logger=migrator t=2024-08-13T17:02:04.166633069Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2024-08-13T17:02:04.239932505Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.280946ms grafana | logger=migrator t=2024-08-13T17:02:04.244411786Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2024-08-13T17:02:04.245397777Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=985.281µs grafana | logger=migrator t=2024-08-13T17:02:04.248096897Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-08-13T17:02:04.248985368Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=885.751µs grafana | logger=migrator t=2024-08-13T17:02:04.251481606Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-08-13T17:02:04.276200294Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.718298ms grafana | logger=migrator t=2024-08-13T17:02:04.280101259Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2024-08-13T17:02:04.287193778Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.087449ms grafana | logger=migrator t=2024-08-13T17:02:04.291643099Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2024-08-13T17:02:04.291897272Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=252.613µs grafana | 
logger=migrator t=2024-08-13T17:02:04.298855191Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2024-08-13T17:02:04.299390236Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=539.736µs grafana | logger=migrator t=2024-08-13T17:02:04.307072182Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-08-13T17:02:04.308493089Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=1.422127ms grafana | logger=migrator t=2024-08-13T17:02:04.312356492Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2024-08-13T17:02:04.312829188Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=467.956µs grafana | logger=migrator t=2024-08-13T17:02:04.316706562Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2024-08-13T17:02:04.317107606Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=400.824µs grafana | logger=migrator t=2024-08-13T17:02:04.323281445Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2024-08-13T17:02:04.324463929Z level=info msg="Migration successfully executed" id="create folder table" duration=1.182424ms grafana | logger=migrator t=2024-08-13T17:02:04.329661268Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2024-08-13T17:02:04.331843662Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.180914ms grafana | logger=migrator t=2024-08-13T17:02:04.335319752Z level=info msg="Executing migration" id="Add unique index 
for folder.uid and folder.org_id" grafana | logger=migrator t=2024-08-13T17:02:04.337613247Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.286675ms grafana | logger=migrator t=2024-08-13T17:02:04.342720235Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2024-08-13T17:02:04.342769565Z level=info msg="Migration successfully executed" id="Update folder title length" duration=50.47µs grafana | logger=migrator t=2024-08-13T17:02:04.350049838Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-08-13T17:02:04.352066701Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.002623ms grafana | logger=migrator t=2024-08-13T17:02:04.356394769Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-08-13T17:02:04.358181129Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.78776ms grafana | logger=migrator t=2024-08-13T17:02:04.363700882Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2024-08-13T17:02:04.365084247Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.382135ms grafana | logger=migrator t=2024-08-13T17:02:04.368342843Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2024-08-13T17:02:04.368805819Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=462.866µs grafana | logger=migrator t=2024-08-13T17:02:04.372134807Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | 
logger=migrator t=2024-08-13T17:02:04.37241619Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=291.253µs grafana | logger=migrator t=2024-08-13T17:02:04.376386884Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2024-08-13T17:02:04.378578789Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.194715ms grafana | logger=migrator t=2024-08-13T17:02:04.382538463Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2024-08-13T17:02:04.383810958Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.274565ms grafana | logger=migrator t=2024-08-13T17:02:04.388088266Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2024-08-13T17:02:04.389675134Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.588248ms grafana | logger=migrator t=2024-08-13T17:02:04.394249446Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2024-08-13T17:02:04.396286379Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.033053ms grafana | logger=migrator t=2024-08-13T17:02:04.399592566Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2024-08-13T17:02:04.400769369Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.175873ms grafana | logger=migrator t=2024-08-13T17:02:04.405570124Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2024-08-13T17:02:04.407160741Z level=info 
msg="Migration successfully executed" id="create anon_device table" duration=1.589137ms grafana | logger=migrator t=2024-08-13T17:02:04.411876745Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2024-08-13T17:02:04.413993009Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.118004ms grafana | logger=migrator t=2024-08-13T17:02:04.420231819Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2024-08-13T17:02:04.421518693Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.285984ms grafana | logger=migrator t=2024-08-13T17:02:04.425499148Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-08-13T17:02:04.426469729Z level=info msg="Migration successfully executed" id="create signing_key table" duration=970.061µs grafana | logger=migrator t=2024-08-13T17:02:04.429431232Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2024-08-13T17:02:04.431410315Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.977133ms grafana | logger=migrator t=2024-08-13T17:02:04.436074778Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2024-08-13T17:02:04.438196601Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.122453ms grafana | logger=migrator t=2024-08-13T17:02:04.444231509Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2024-08-13T17:02:04.444628384Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=396.765µs grafana | 
logger=migrator t=2024-08-13T17:02:04.4478221Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2024-08-13T17:02:04.459811125Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.989645ms grafana | logger=migrator t=2024-08-13T17:02:04.466138406Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2024-08-13T17:02:04.466725652Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=591.026µs grafana | logger=migrator t=2024-08-13T17:02:04.471020712Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-08-13T17:02:04.471050302Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=33.48µs grafana | logger=migrator t=2024-08-13T17:02:04.475921177Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-08-13T17:02:04.477769647Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.84894ms grafana | logger=migrator t=2024-08-13T17:02:04.482135497Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-08-13T17:02:04.482178017Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=43.45µs grafana | logger=migrator t=2024-08-13T17:02:04.486523077Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2024-08-13T17:02:04.487879642Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.356384ms grafana | 
logger=migrator t=2024-08-13T17:02:04.490936006Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-08-13T17:02:04.492219741Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.282575ms grafana | logger=migrator t=2024-08-13T17:02:04.495744151Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2024-08-13T17:02:04.497672063Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.931601ms grafana | logger=migrator t=2024-08-13T17:02:04.501493725Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2024-08-13T17:02:04.502555437Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.062202ms grafana | logger=migrator t=2024-08-13T17:02:04.50554862Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2024-08-13T17:02:04.505849144Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=300.874µs grafana | logger=migrator t=2024-08-13T17:02:04.508900509Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2024-08-13T17:02:04.509612667Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=711.908µs grafana | logger=migrator t=2024-08-13T17:02:04.516037089Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2024-08-13T17:02:04.51701424Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=978.871µs grafana | logger=migrator t=2024-08-13T17:02:04.521983366Z level=info msg="Executing migration" id="create 
cloud_migration_run table v1" grafana | logger=migrator t=2024-08-13T17:02:04.523739816Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.75525ms grafana | logger=migrator t=2024-08-13T17:02:04.529657583Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2024-08-13T17:02:04.539739117Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.080324ms grafana | logger=migrator t=2024-08-13T17:02:04.543898323Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2024-08-13T17:02:04.553641673Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.7424ms grafana | logger=migrator t=2024-08-13T17:02:04.557448306Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2024-08-13T17:02:04.565334775Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.885819ms grafana | logger=migrator t=2024-08-13T17:02:04.602600675Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2024-08-13T17:02:04.614471199Z level=info msg="Migration successfully executed" id="add migration uid column" duration=11.868824ms grafana | logger=migrator t=2024-08-13T17:02:04.617827007Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2024-08-13T17:02:04.617959838Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=133.201µs grafana | logger=migrator t=2024-08-13T17:02:04.623847715Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2024-08-13T17:02:04.624740355Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=893.26µs grafana | logger=migrator t=2024-08-13T17:02:04.630824974Z level=info 
msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2024-08-13T17:02:04.644455277Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=13.631753ms grafana | logger=migrator t=2024-08-13T17:02:04.647985447Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2024-08-13T17:02:04.648117088Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=131.351µs grafana | logger=migrator t=2024-08-13T17:02:04.653702611Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2024-08-13T17:02:04.655030117Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.327176ms grafana | logger=migrator t=2024-08-13T17:02:04.658292323Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2024-08-13T17:02:04.658359874Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=67.981µs grafana | logger=migrator t=2024-08-13T17:02:04.661832063Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2024-08-13T17:02:04.673246662Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.394949ms grafana | logger=migrator t=2024-08-13T17:02:04.677771823Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2024-08-13T17:02:04.68546088Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.688147ms grafana | logger=migrator t=2024-08-13T17:02:04.689250822Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 
grafana | logger=migrator t=2024-08-13T17:02:04.689822839Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=571.667µs grafana | logger=migrator t=2024-08-13T17:02:04.694100948Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2024-08-13T17:02:04.69433741Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=236.182µs grafana | logger=migrator t=2024-08-13T17:02:04.701009435Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2024-08-13T17:02:04.71480466Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=13.795685ms grafana | logger=migrator t=2024-08-13T17:02:04.720466594Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2024-08-13T17:02:04.730363716Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.896462ms grafana | logger=migrator t=2024-08-13T17:02:04.73431799Z level=info msg="migrations completed" performed=572 skipped=0 duration=4.643494914s grafana | logger=migrator t=2024-08-13T17:02:04.734978418Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2024-08-13T17:02:04.752713738Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2024-08-13T17:02:04.752923001Z level=info msg="Created default organization" grafana | logger=secrets t=2024-08-13T17:02:04.757810495Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-08-13T17:02:04.821272471Z level=info msg="Restored cache from database" duration=498.685µs grafana | logger=plugin.store t=2024-08-13T17:02:04.823034551Z 
level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2024-08-13T17:02:04.853714317Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=plugins.initialization t=2024-08-13T17:02:04.853742637Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered" grafana | logger=local.finder t=2024-08-13T17:02:04.853802788Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-08-13T17:02:04.853818688Z level=info msg="Plugins loaded" count=54 duration=30.785457ms grafana | logger=query_data t=2024-08-13T17:02:04.858290308Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-08-13T17:02:04.862351015Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-08-13T17:02:04.86998739Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert.state.manager t=2024-08-13T17:02:04.88321483Z level=info msg="Running in alternative execution of Error/NoData mode" grafana | logger=infra.usagestats.collector t=2024-08-13T17:02:04.886988852Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-08-13T17:02:04.88952186Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-08-13T17:02:04.908204571Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-08-13T17:02:04.908224631Z level=info msg="finished to provision alerting" grafana | logger=ngalert.state.manager t=2024-08-13T17:02:04.908528595Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager 
t=2024-08-13T17:02:04.909308993Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=ngalert.state.manager t=2024-08-13T17:02:04.909321874Z level=info msg="State cache has been initialized" states=0 duration=783.938µs grafana | logger=ngalert.scheduler t=2024-08-13T17:02:04.909458296Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 grafana | logger=ticker t=2024-08-13T17:02:04.909550457Z level=info msg=starting first_tick=2024-08-13T17:02:10Z grafana | logger=grafanaStorageLogger t=2024-08-13T17:02:04.910464597Z level=info msg="Storage starting" grafana | logger=http.server t=2024-08-13T17:02:04.915816897Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=provisioning.dashboard t=2024-08-13T17:02:04.944017375Z level=info msg="starting to provision dashboards" grafana | logger=grafana.update.checker t=2024-08-13T17:02:04.979601817Z level=info msg="Update check succeeded" duration=70.147262ms grafana | logger=plugins.update.checker t=2024-08-13T17:02:04.981816701Z level=info msg="Update check succeeded" duration=72.206783ms grafana | logger=sqlstore.transactions t=2024-08-13T17:02:05.029451818Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=sqlstore.transactions t=2024-08-13T17:02:05.077247567Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=sqlstore.transactions t=2024-08-13T17:02:05.089385504Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-08-13T17:02:05.101315898Z level=info msg="Patterns update finished" duration=79.973562ms grafana | logger=grafana-apiserver t=2024-08-13T17:02:05.163347057Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 
grafana | logger=grafana-apiserver t=2024-08-13T17:02:05.165575183Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=sqlstore.transactions t=2024-08-13T17:02:05.175649296Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=provisioning.dashboard t=2024-08-13T17:02:05.312599869Z level=info msg="finished to provision dashboards" grafana | logger=infra.usagestats t=2024-08-13T17:03:09.918314871Z level=info msg="Usage stats are ready to report" =================================== ======== Logs from kafka ======== kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-08-13 17:02:01,250] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.0-ccs.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-3.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annota
tions-2.16.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/utility-belt-7.7.0-130.jar:/usr/share/java/cp-base-new/common-utils-7.7.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-common-7.7.0-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.0-ccs.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.0-ccs.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,251] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO 
Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,252] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,255] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,258] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-08-13 17:02:01,263] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-08-13 17:02:01,269] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:01,280] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:01,280] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:01,286] INFO Socket connection established, initiating session, client: /172.17.0.8:34700, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:01,323] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x10000027b8a0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:01,435] INFO Session: 0x10000027b8a0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:01,435] INFO EventThread shut down for session: 0x10000027b8a0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-08-13 17:02:01,998] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-08-13 17:02:02,232] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-08-13 17:02:02,299] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-08-13 17:02:02,300] INFO starting (kafka.server.KafkaServer) kafka | [2024-08-13 17:02:02,300] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-08-13 17:02:02,312] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-08-13 17:02:02,315] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,315] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,315] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,315] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,315] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,315] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.
jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../shar
e/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.ja
r:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:java.io.tmpdir=/tmp 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,316] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,317] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) kafka | [2024-08-13 17:02:02,321] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-08-13 17:02:02,326] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:02,328] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-08-13 17:02:02,331] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:02,335] INFO Socket connection established, initiating session, client: /172.17.0.8:34702, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:02,343] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x10000027b8a0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-08-13 17:02:02,346] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-08-13 17:02:02,671] INFO Cluster ID = SIMEIeI3Sp-rNGzMENX8ug (kafka.server.KafkaServer) kafka | [2024-08-13 17:02:02,733] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | 
controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | eligible.leader.replicas.enable = false kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.rebalance.protocols = [classic] kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.7-IV4 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 
300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 
1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class 
org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | 
sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.allow.dn.changes = false kafka | ssl.allow.san.changes = false kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | telemetry.max.bytes = 1048576 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | 
transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | unstable.metadata.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-08-13 17:02:02,764] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-08-13 17:02:02,765] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-08-13 17:02:02,765] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-08-13 17:02:02,768] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-08-13 17:02:02,773] INFO [KafkaServer id=1] Rewriting 
/var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) kafka | [2024-08-13 17:02:02,840] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-08-13 17:02:02,847] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-08-13 17:02:02,857] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) kafka | [2024-08-13 17:02:02,859] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-08-13 17:02:02,860] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) kafka | [2024-08-13 17:02:02,870] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-08-13 17:02:02,913] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-08-13 17:02:02,928] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-08-13 17:02:02,942] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-08-13 17:02:02,967] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2024-08-13 17:02:03,278] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-08-13 17:02:03,295] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-08-13 17:02:03,296] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-08-13 17:02:03,300] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-08-13 
17:02:03,307] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
kafka | [2024-08-13 17:02:03,331] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,333] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,336] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,338] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,338] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,353] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-08-13 17:02:03,355] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
kafka | [2024-08-13 17:02:03,382] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-08-13 17:02:03,406] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1723568523395,1723568523395,1,0,0,72057604700504065,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-08-13 17:02:03,407] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-08-13 17:02:03,444] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-08-13 17:02:03,450] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,456] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,457] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,464] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-08-13 17:02:03,474] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:03,474] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,478] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:03,483] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,489] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-08-13 17:02:03,492] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-08-13 17:02:03,500] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-08-13 17:02:03,500] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-08-13 17:02:03,542] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,543] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-08-13 17:02:03,549] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,553] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-08-13 17:02:03,556] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,560] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,576] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-08-13 17:02:03,586] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,591] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2024-08-13 17:02:03,591] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,596] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-08-13 17:02:03,598] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2024-08-13 17:02:03,599] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-08-13 17:02:03,607] INFO Kafka version: 7.7.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-08-13 17:02:03,607] INFO Kafka commitId: 342a7370342e6bbcecbdf171dbe71cf87ce67c49 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-08-13 17:02:03,607] INFO Kafka startTimeMs: 1723568523601 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-08-13 17:02:03,608] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2024-08-13 17:02:03,608] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2024-08-13 17:02:03,609] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,610] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,610] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,610] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,614] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,614] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,614] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,615] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-08-13 17:02:03,616] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,619] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:03,625] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-08-13 17:02:03,626] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-08-13 17:02:03,629] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-08-13 17:02:03,629] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-08-13 17:02:03,630] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-08-13 17:02:03,630] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-08-13 17:02:03,633] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-08-13 17:02:03,633] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,640] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,641] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,641] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,642] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,643] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,643] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2024-08-13 17:02:03,656] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:03,726] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-08-13 17:02:03,726] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
kafka | [2024-08-13 17:02:03,781] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
kafka | [2024-08-13 17:02:08,659] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:08,660] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,374] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,380] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-08-13 17:02:33,382] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-08-13 17:02:33,389] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,444] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(8_d_0rTER36cD8Dt9GfBNg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,445] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,447] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,447] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,451] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,549] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-08-13 17:02:33,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-08-13 17:02:33,552] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,558] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,558] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,558] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,566] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,567] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,577] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(iPwjcBAySjKFnBr1jk9GiA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,577] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2024-08-13 17:02:33,579] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,584] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,589] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,590] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,590] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,590] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,590] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,590] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,591] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,598] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0
(state.change.logger) kafka | [2024-08-13 17:02:33,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-08-13 17:02:33,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-08-13 17:02:33,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-08-13 17:02:33,606] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-08-13 17:02:33,607] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-08-13 17:02:33,607] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-08-13 17:02:33,609] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) kafka | [2024-08-13 17:02:33,611] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2024-08-13 17:02:33,714] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,728] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,730] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition 
policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,732] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,734] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(8_d_0rTER36cD8Dt9GfBNg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-08-13 17:02:33,752] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-08-13 17:02:33,764] INFO [Broker id=1] Finished LeaderAndIsr request in 200ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2024-08-13 17:02:33,765] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,765] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,766] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,767] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,768] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,768] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-08-13 
17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-08-13 17:02:33,768] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-08-13 17:02:33,769] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-08-13 17:02:33,770] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,770] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,771] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=8_d_0rTER36cD8Dt9GfBNg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-08-13 17:02:33,780] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-08-13 17:02:33,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,780] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,781] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,781] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,781] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-08-13 17:02:33,781] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,781] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,782] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,782] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-08-13 17:02:33,785] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,790] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,794] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,795] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,795] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,795] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,795] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2024-08-13 17:02:33,824] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-08-13 17:02:33,824] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-08-13 17:02:33,824] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-08-13 17:02:33,824] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-08-13 17:02:33,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-08-13 17:02:33,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-08-13 17:02:33,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-08-13 17:02:33,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-08-13 17:02:33,826] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-08-13 17:02:33,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-08-13 17:02:33,830] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-08-13 17:02:33,831] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-08-13 17:02:33,832] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-08-13 17:02:33,833] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-08-13 17:02:33,833] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-08-13 17:02:33,834] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) kafka | [2024-08-13 17:02:33,840] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,842] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties 
{cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,843] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,843] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,843] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-08-13 17:02:33,853] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,855] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,855] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,855] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,855] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,866] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,869] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,869] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,870] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,870] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,881] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,882] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,882] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,882] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,883] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,891] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,892] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,892] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,892] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,892] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,901] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,902] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,902] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,902] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,902] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,911] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,911] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,912] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,912] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,912] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,960] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,961] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,961] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,961] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,961] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:33,969] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:33,970] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:33,970] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,970] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:33,970] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-08-13 17:02:33,978] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:33,979] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:33,979] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,979] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,979] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:33,987] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:33,987] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:33,987] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,987] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,987] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:33,996] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:33,997] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:33,998] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,998] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:33,998] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,005] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,005] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,005] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,005] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,006] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,015] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,015] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,016] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,016] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,016] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,023] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,024] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,024] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,024] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,024] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,031] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,032] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,032] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,032] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,032] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,041] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,042] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,042] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,042] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,042] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,050] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,051] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,051] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,051] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,051] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,059] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,060] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,060] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,060] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,060] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,068] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,068] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,068] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,068] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,069] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,078] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,083] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,083] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,083] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,083] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,090] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,090] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,090] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,091] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,091] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,098] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,099] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,099] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,099] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,099] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,109] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,110] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,110] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,110] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,110] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,124] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,125] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,125] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,125] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,126] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,132] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,133] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,133] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,133] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,133] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,141] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,142] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,142] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,142] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,142] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,156] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,157] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,157] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,157] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,158] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,165] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,166] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,166] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,167] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,167] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,174] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,175] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,175] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,175] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,176] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,184] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,185] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,185] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,185] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,185] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,196] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,197] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,197] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,197] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,198] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,207] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,208] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,208] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,208] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,208] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,222] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,223] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,223] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,223] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,224] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,233] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,234] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,235] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,235] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,235] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,241] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,242] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,242] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,242] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,242] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,250] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,250] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,250] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,250] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,251] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,260] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,260] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,260] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,260] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,261] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,276] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,276] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,276] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,276] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,277] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,287] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,288] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,288] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,288] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,288] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-08-13 17:02:34,295] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-08-13 17:02:34,295] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-08-13 17:02:34,296] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,296] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-08-13 17:02:34,296] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-08-13 17:02:34,303] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,303] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,303] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,303] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,303] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,313] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,313] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,313] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,314] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,314] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,322] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,323] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,323] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,324] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,324] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,376] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,377] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,377] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,377] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,377] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,386] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,387] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,387] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,387] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,388] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,393] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,394] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,394] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,394] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,394] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,399] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,400] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,400] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,400] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,400] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,407] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,408] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,408] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,408] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,408] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,415] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-08-13 17:02:34,416] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-08-13 17:02:34,416] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,417] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-08-13 17:02:34,418] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(iPwjcBAySjKFnBr1jk9GiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-08-13 17:02:34,423] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-08-13 17:02:34,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-08-13 17:02:34,426] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,426] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 
for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,428] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,428] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,429] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,436] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,437] INFO [Broker id=1] Finished LeaderAndIsr request in 647ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-08-13 17:02:34,437] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,437] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,437] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,438] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=iPwjcBAySjKFnBr1jk9GiA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,439] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,440] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,440] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,440] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,442] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request
sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,443] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-08-13 17:02:34,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-08-13 17:02:34,554] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c3121714-2fa5-463e-adc9-74ade2a795c3 in Empty state. Created a new member id consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,554] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,566] INFO [GroupCoordinator 1]: Preparing to rebalance group c3121714-2fa5-463e-adc9-74ade2a795c3 in state PreparingRebalance with old generation 0 (__consumer_offsets-14) (reason: Adding new member consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,566] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,814] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 3be38c39-57f3-4b03-8e86-61001329f2ae in Empty state. Created a new member id consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:34,817] INFO [GroupCoordinator 1]: Preparing to rebalance group 3be38c39-57f3-4b03-8e86-61001329f2ae in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,572] INFO [GroupCoordinator 1]: Stabilized group c3121714-2fa5-463e-adc9-74ade2a795c3 generation 1 (__consumer_offsets-14) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,576] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,600] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,602] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472 for group c3121714-2fa5-463e-adc9-74ade2a795c3 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,818] INFO [GroupCoordinator 1]: Stabilized group 3be38c39-57f3-4b03-8e86-61001329f2ae generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-08-13 17:02:37,832] INFO [GroupCoordinator 1]: Assignment received from leader consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1 for group 3be38c39-57f3-4b03-8e86-61001329f2ae for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2024-08-13 17:01:53+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-08-13 17:01:53+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-08-13 17:01:53+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-08-13 17:01:53+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-08-13 17:01:53 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-08-13 17:01:53 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-08-13 17:01:53 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-08-13 17:01:54+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-08-13 17:01:54+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-08-13 17:01:54+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-08-13 17:01:54 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-08-13 17:01:54 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-08-13 17:01:54 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-08-13 17:01:54 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-08-13 17:01:54 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-08-13 17:01:55 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-08-13 17:01:55 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-08-13 17:01:55 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-08-13 17:01:55 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-08-13 17:01:55 0 [Note] Plugin 'FEEDBACK' is disabled. 
mariadb | 2024-08-13 17:01:55 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-08-13 17:01:55 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-08-13 17:01:55 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-08-13 17:01:55 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-08-13 17:01:55+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-08-13 17:01:57+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-08-13 17:01:57+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-08-13 17:01:57+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-08-13 17:01:57+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in 
migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-08-13 17:01:58+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-08-13 17:01:58 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-08-13 17:01:58 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-08-13 17:01:58 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-08-13 17:01:58 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-08-13 17:01:58 0 [Note] InnoDB: Buffer pool(s) dump completed at 240813 17:01:58 mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Shutdown completed; log sequence number 330049; transaction id 298 mariadb | 2024-08-13 17:01:59 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-08-13 17:01:59+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-08-13 17:01:59+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-08-13 17:01:59 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-08-13 17:01:59 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-08-13 17:01:59 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-08-13 17:01:59 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: log sequence number 330049; transaction id 299 mariadb | 2024-08-13 17:01:59 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-08-13 17:01:59 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-08-13 17:01:59 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-08-13 17:01:59 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-08-13 17:01:59 0 [Note] Server socket created on IP: '::'. 
mariadb | 2024-08-13 17:01:59 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-08-13 17:01:59 0 [Note] InnoDB: Buffer pool(s) load completed at 240813 17:01:59 mariadb | 2024-08-13 17:01:59 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-08-13 17:01:59 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-08-13 17:01:59 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-08-13 17:01:59 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) =================================== ======== Logs from apex-pdp ======== policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.8:9092) open policy-apex-pdp | Waiting for pap port 6969... 
policy-apex-pdp | pap (172.17.0.9:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-08-13T17:02:33.863+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-08-13T17:02:34.043+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 3be38c39-57f3-4b03-8e86-61001329f2ae policy-apex-pdp | 
group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | 
sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | 
ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-08-13T17:02:34.230+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-08-13T17:02:34.230+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-08-13T17:02:34.230+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568554228 policy-apex-pdp | [2024-08-13T17:02:34.232+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-1, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-08-13T17:02:34.245+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-08-13T17:02:34.246+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-08-13T17:02:34.247+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3be38c39-57f3-4b03-8e86-61001329f2ae, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-08-13T17:02:34.268+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | 
check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 3be38c39-57f3-4b03-8e86-61001329f2ae policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | 
sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = 
SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-08-13T17:02:34.277+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-08-13T17:02:34.277+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-08-13T17:02:34.277+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568554277 policy-apex-pdp | [2024-08-13T17:02:34.278+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-08-13T17:02:34.278+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b3038da6-32b7-4792-8016-75b5907b3659, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-08-13T17:02:34.292+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none 
policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 
0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null 
policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-08-13T17:02:34.305+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-08-13T17:02:34.331+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-08-13T17:02:34.331+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-08-13T17:02:34.331+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568554331 policy-apex-pdp | [2024-08-13T17:02:34.332+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b3038da6-32b7-4792-8016-75b5907b3659, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-08-13T17:02:34.332+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-08-13T17:02:34.332+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-08-13T17:02:34.334+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-08-13T17:02:34.334+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-08-13T17:02:34.336+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-08-13T17:02:34.336+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-08-13T17:02:34.336+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-08-13T17:02:34.337+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource 
[getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3be38c39-57f3-4b03-8e86-61001329f2ae, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-apex-pdp | [2024-08-13T17:02:34.337+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3be38c39-57f3-4b03-8e86-61001329f2ae, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-08-13T17:02:34.337+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-08-13T17:02:34.362+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-08-13T17:02:34.365+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"11566d66-0afc-4afd-ab66-1030e94c489a","timestampMs":1723568554336,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-08-13T17:02:34.596+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-08-13T17:02:34.597+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-08-13T17:02:34.597+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-08-13T17:02:34.598+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-08-13T17:02:34.614+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | 
[2024-08-13T17:02:34.614+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-08-13T17:02:34.615+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. policy-apex-pdp | [2024-08-13T17:02:34.614+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-08-13T17:02:34.790+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug policy-apex-pdp | [2024-08-13T17:02:34.790+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug 
policy-apex-pdp | [2024-08-13T17:02:34.791+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-08-13T17:02:34.791+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-08-13T17:02:34.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] (Re-)joining group policy-apex-pdp | [2024-08-13T17:02:34.815+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Request joining group due to: need to re-join with the given member-id: consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1 policy-apex-pdp | [2024-08-13T17:02:34.816+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-08-13T17:02:34.816+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] (Re-)joining group policy-apex-pdp | [2024-08-13T17:02:35.341+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-08-13T17:02:35.343+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-08-13T17:02:37.820+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Successfully joined group with generation Generation{generationId=1, memberId='consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1', protocol='range'} policy-apex-pdp | [2024-08-13T17:02:37.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Finished assignment for group at generation 1: {consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-08-13T17:02:37.835+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Successfully synced group in generation Generation{generationId=1, memberId='consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2-d13f5bde-d737-4f99-8e8b-6161463430f1', protocol='range'} policy-apex-pdp | [2024-08-13T17:02:37.836+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Notifying assignor about the new 
Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-08-13T17:02:37.841+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-08-13T17:02:37.851+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-08-13T17:02:37.868+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3be38c39-57f3-4b03-8e86-61001329f2ae-2, groupId=3be38c39-57f3-4b03-8e86-61001329f2ae] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-apex-pdp | [2024-08-13T17:02:54.336+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f56744a6-c79e-45d1-9261-1b79608d273d","timestampMs":1723568574336,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-08-13T17:02:54.361+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f56744a6-c79e-45d1-9261-1b79608d273d","timestampMs":1723568574336,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-08-13T17:02:54.364+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | 
[2024-08-13T17:02:54.541+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"94134ac7-f438-4299-a357-6d4733bc0237","timestampMs":1723568574438,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.549+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-08-13T17:02:54.549+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"72241bc4-797e-4f34-8e42-8d916ac33b53","timestampMs":1723568574549,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-08-13T17:02:54.551+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"94134ac7-f438-4299-a357-6d4733bc0237","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a6bebc18-1074-4c55-8c9b-c1c8e9d7314a","timestampMs":1723568574551,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.560+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"72241bc4-797e-4f34-8e42-8d916ac33b53","timestampMs":1723568574549,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-apex-pdp | 
[2024-08-13T17:02:54.560+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-13T17:02:54.560+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"94134ac7-f438-4299-a357-6d4733bc0237","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a6bebc18-1074-4c55-8c9b-c1c8e9d7314a","timestampMs":1723568574551,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.560+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-13T17:02:54.599+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8d28b914-0ef0-4ad6-96b5-835801266569","timestampMs":1723568574439,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.602+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8d28b914-0ef0-4ad6-96b5-835801266569","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"7b022d67-c7cd-493b-8723-523293c4e0d4","timestampMs":1723568574602,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.611+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8d28b914-0ef0-4ad6-96b5-835801266569","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7b022d67-c7cd-493b-8723-523293c4e0d4","timestampMs":1723568574602,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.612+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-13T17:02:54.810+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"89e0643d-8cc1-42a9-a0be-206cd431b61c","timestampMs":1723568574635,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.812+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"89e0643d-8cc1-42a9-a0be-206cd431b61c","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"f4901ce9-22c6-40ec-b9d8-6310524b71cf","timestampMs":1723568574812,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.820+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"89e0643d-8cc1-42a9-a0be-206cd431b61c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f4901ce9-22c6-40ec-b9d8-6310524b71cf","timestampMs":1723568574812,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-08-13T17:02:54.820+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-08-13T17:02:56.171+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.4 - policyadmin [13/Aug/2024:17:02:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.53.2" policy-apex-pdp | [2024-08-13T17:03:56.081+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.4 - policyadmin [13/Aug/2024:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.53.2" =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-08-13T17:02:08.944+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-08-13T17:02:09.060+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-08-13T17:02:09.062+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-08-13T17:02:11.176+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-08-13T17:02:11.264+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 78 ms. Found 6 JPA repository interfaces. policy-api | [2024-08-13T17:02:11.792+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-08-13T17:02:11.793+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-08-13T17:02:12.551+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-08-13T17:02:12.568+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-08-13T17:02:12.570+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-08-13T17:02:12.570+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-08-13T17:02:12.677+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-08-13T17:02:12.677+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3538 ms policy-api | [2024-08-13T17:02:13.156+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-08-13T17:02:13.245+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-08-13T17:02:13.292+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-08-13T17:02:13.777+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-08-13T17:02:13.823+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-08-13T17:02:13.931+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb policy-api | [2024-08-13T17:02:13.934+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2024-08-13T17:02:16.283+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-08-13T17:02:16.286+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-08-13T17:02:17.354+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-08-13T17:02:18.253+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-08-13T17:02:19.357+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-08-13T17:02:19.619+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@7f930614, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6ef0a044, org.springframework.security.web.context.SecurityContextHolderFilter@231e5af, org.springframework.security.web.header.HeaderWriterFilter@4c48ccc4, org.springframework.security.web.authentication.logout.LogoutFilter@73d7b6b0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@25b2d26a, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56ed024b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5a26a14, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@73e505d5, org.springframework.security.web.access.ExceptionTranslationFilter@1d93bd2a, org.springframework.security.web.access.intercept.AuthorizationFilter@43cbc87f] policy-api | 
[2024-08-13T17:02:20.543+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-08-13T17:02:20.648+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-08-13T17:02:20.679+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-08-13T17:02:20.697+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.481 seconds (process running for 13.189) policy-api | [2024-08-13T17:02:39.940+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-08-13T17:02:39.940+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-08-13T17:02:39.941+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-08-13T17:03:07.824+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: policy-api | [] =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 
policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v CLAMP_K8S_TEST: policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Test policy-csit | ============================================================================== policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Healthcheck :: Verify policy pap health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | policy-csit | 22 tests, 22 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Slas policy-csit | ============================================================================== policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | policy-csit | 8 tests, 8 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas | PASS | policy-csit | 30 tests, 30 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... 
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, 
ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) 
NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY
PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
policy-db-migrator | JOIN pdpstatistics b
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
policy-db-migrator | SET a.id = b.id
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | --------------
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | msg
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | TRUNCATE TABLE sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE statistics_sequence policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | name version policy-db-migrator | policyadmin 1300 policy-db-migrator | ID script operation from_version to_version tag success atTime policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator 
| 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:00 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 29 
0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:01 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:02 policy-db-migrator | 65 
0740-toscarelationshiptype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:03 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 
17:02:04 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:04 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1308241702000800u 1 2024-08-13 17:02:05 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 
1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1308241702000900u 1 2024-08-13 17:02:05 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:05 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:05 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:05 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:05 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:05 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 
1308241702001000u 1 2024-08-13 17:02:06 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:06 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:06 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1308241702001000u 1 2024-08-13 17:02:06 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1308241702001100u 1 2024-08-13 17:02:06 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1308241702001200u 1 2024-08-13 17:02:06 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1308241702001200u 1 2024-08-13 17:02:06 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1308241702001200u 1 2024-08-13 17:02:06 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1308241702001200u 1 2024-08-13 17:02:06 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1308241702001300u 1 2024-08-13 17:02:06 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1308241702001300u 1 2024-08-13 17:02:06 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1308241702001300u 1 2024-08-13 17:02:06 policy-db-migrator | policyadmin: OK @ 1300 =================================== ======== Logs from pap ======== policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.3:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.8:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2024-08-13T17:02:22.912+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2024-08-13T17:02:22.973+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-08-13T17:02:22.975+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-08-13T17:02:25.142+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-08-13T17:02:25.245+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 94 ms. Found 7 JPA repository interfaces. policy-pap | [2024-08-13T17:02:25.748+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-08-13T17:02:25.749+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-08-13T17:02:26.407+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-08-13T17:02:26.419+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-08-13T17:02:26.421+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-08-13T17:02:26.421+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-08-13T17:02:26.519+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-08-13T17:02:26.519+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3454 ms policy-pap | [2024-08-13T17:02:26.940+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-08-13T17:02:27.014+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-08-13T17:02:27.359+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-08-13T17:02:27.471+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@14982a82 policy-pap | [2024-08-13T17:02:27.474+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-pap | [2024-08-13T17:02:27.508+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-08-13T17:02:29.191+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-08-13T17:02:29.201+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-08-13T17:02:29.744+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-08-13T17:02:30.222+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-08-13T17:02:30.352+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-08-13T17:02:30.643+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = c3121714-2fa5-463e-adc9-74ade2a795c3 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | 
ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-13T17:02:30.817+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-13T17:02:30.818+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:30.818+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568550815 policy-pap | [2024-08-13T17:02:30.821+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-1, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-13T17:02:30.822+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | 
fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 
policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-13T17:02:30.834+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | 
[2024-08-13T17:02:30.834+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:30.834+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568550834 policy-pap | [2024-08-13T17:02:30.835+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-13T17:02:31.243+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-08-13T17:02:31.403+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-08-13T17:02:31.636+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6c851821, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4c0930c4, org.springframework.security.web.context.SecurityContextHolderFilter@70aa03c0, org.springframework.security.web.header.HeaderWriterFilter@5ced0537, org.springframework.security.web.authentication.logout.LogoutFilter@5e34a84b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5308e79d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2435c6ae, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@77db231c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@75c0cd39, org.springframework.security.web.access.ExceptionTranslationFilter@23d23d98, org.springframework.security.web.access.intercept.AuthorizationFilter@35744f8] policy-pap | [2024-08-13T17:02:32.602+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-08-13T17:02:32.708+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-08-13T17:02:32.729+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-08-13T17:02:32.747+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-08-13T17:02:32.747+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-08-13T17:02:32.748+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-08-13T17:02:32.753+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-08-13T17:02:32.753+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID 
Dispatcher policy-pap | [2024-08-13T17:02:32.754+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-08-13T17:02:32.754+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-08-13T17:02:32.761+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c3121714-2fa5-463e-adc9-74ade2a795c3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1b96d447 policy-pap | [2024-08-13T17:02:32.782+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c3121714-2fa5-463e-adc9-74ade2a795c3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-13T17:02:32.783+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | 
check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = c3121714-2fa5-463e-adc9-74ade2a795c3 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | 
sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = 
null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-13T17:02:32.790+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-13T17:02:32.790+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:32.790+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568552790 policy-pap | [2024-08-13T17:02:32.791+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-13T17:02:32.792+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2024-08-13T17:02:32.792+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=194d5da2-b24e-4e7a-86c4-850a31e2153a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@591ba330 policy-pap | [2024-08-13T17:02:32.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=194d5da2-b24e-4e7a-86c4-850a31e2153a, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-13T17:02:32.793+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | 
socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-08-13T17:02:32.821+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-13T17:02:32.822+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:32.822+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568552821 policy-pap | [2024-08-13T17:02:32.822+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-08-13T17:02:32.823+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2024-08-13T17:02:32.823+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=194d5da2-b24e-4e7a-86c4-850a31e2153a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, 
uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-13T17:02:32.823+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c3121714-2fa5-463e-adc9-74ade2a795c3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-08-13T17:02:32.823+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7b095cc6-1f1d-4a85-b337-c24df8d3c82f, alive=false, publisher=null]]: starting policy-pap | [2024-08-13T17:02:32.851+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | 
max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 
10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-08-13T17:02:32.869+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
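The ProducerConfig dump above shows the combination that enables Kafka's idempotent producer: acks = -1, enable.idempotence = true, retries = 2147483647, and max.in.flight.requests.per.connection = 5. A minimal sketch of those values as a client-side config dict — illustrative only, with kafka-python-style underscore names assumed (the log itself comes from the Java client's dotted names), and no broker connection attempted:

```python
# Illustrative only: mirrors the ProducerConfig values logged above.
# Underscore property names follow kafka-python conventions (an assumption);
# the Java client logs the equivalent dotted names.
producer_config = {
    "bootstrap_servers": ["kafka:9092"],
    "acks": -1,                                  # logged: acks = -1 (wait for all ISRs)
    "enable_idempotence": True,                  # logged: enable.idempotence = true
    "retries": 2147483647,                       # logged: retries = 2147483647 (Integer.MAX_VALUE)
    "max_in_flight_requests_per_connection": 5,  # logged: 5 (idempotence requires <= 5)
    "linger_ms": 0,
    "batch_size": 16384,
}

# Idempotence requires acks=all (-1) and at most 5 in-flight requests per
# connection; the logged values satisfy both, which is why the client logs
# "Instantiated an idempotent producer" rather than raising a config error.
assert producer_config["acks"] == -1
assert producer_config["max_in_flight_requests_per_connection"] <= 5
```

With retries effectively unbounded, delivery is instead bounded by delivery.timeout.ms = 120000 from the same dump.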
policy-pap | [2024-08-13T17:02:32.895+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-13T17:02:32.895+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:32.895+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568552894 policy-pap | [2024-08-13T17:02:32.895+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7b095cc6-1f1d-4a85-b337-c24df8d3c82f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-08-13T17:02:32.895+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8d7124bf-a748-4236-bb73-d5789ecd2403, alive=false, publisher=null]]: starting policy-pap | [2024-08-13T17:02:32.896+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | 
partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | 
socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-08-13T17:02:32.896+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
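Every config dump above carries reconnect.backoff.ms = 50 and reconnect.backoff.max.ms = 1000: the Java client doubles the reconnect delay after each consecutive failure to a broker, capped at the max (and adds random jitter, omitted in this sketch). The resulting schedule can be sketched as:

```python
def reconnect_backoff(attempt: int, base_ms: int = 50, max_ms: int = 1000) -> int:
    """Exponential reconnect backoff matching the values logged above.

    Doubles base_ms per prior consecutive failure and caps at max_ms.
    The real Kafka client also applies random jitter around each value,
    omitted here for clarity.
    """
    return min(base_ms * (2 ** attempt), max_ms)

# Delay before reconnect attempts 1..7 to the same broker:
schedule = [reconnect_backoff(i) for i in range(7)]
# doubles from 50 ms and caps at 1000 ms
```

This is why the transient metadata warnings later in the log resolve within a second or two once the topic leader is available.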
policy-pap | [2024-08-13T17:02:32.899+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-08-13T17:02:32.899+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-08-13T17:02:32.899+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1723568552899 policy-pap | [2024-08-13T17:02:32.901+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8d7124bf-a748-4236-bb73-d5789ecd2403, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-08-13T17:02:32.901+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-08-13T17:02:32.901+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-08-13T17:02:32.905+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-08-13T17:02:32.905+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-08-13T17:02:32.906+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-08-13T17:02:32.906+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-08-13T17:02:32.908+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-08-13T17:02:32.908+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-08-13T17:02:32.911+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-08-13T17:02:32.911+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2024-08-13T17:02:32.912+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-08-13T17:02:32.914+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.802 seconds (process running for 11.426) policy-pap | 
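By this point the log shows PAP has created four consumers across two groups (`policy-pap` and the UUID-named group `c3121714-…`), all subscribed to `policy-pdp-pap`; the repeated ConsumerConfig dumps differ only in `client.id` and `group.id`. A condensed view of the settings the dumps share, as a plain dict with kafka-python-style names assumed and no broker connection attempted:

```python
# Settings common to every ConsumerConfig dump in the log above.
# Underscore names are a kafka-python-style assumption; the Java client
# logs the dotted equivalents.
common_consumer_config = {
    "bootstrap_servers": ["kafka:9092"],
    "auto_offset_reset": "latest",     # new groups start at the log tail
    "enable_auto_commit": True,
    "auto_commit_interval_ms": 5000,
    "session_timeout_ms": 45000,
    "heartbeat_interval_ms": 3000,     # kept well below the session timeout
    "max_poll_interval_ms": 300000,
    "security_protocol": "PLAINTEXT",  # no TLS/SASL in this CSIT setup
}

# The two consumer groups visible in the log; both subscribe to the same topic.
groups = ["policy-pap", "c3121714-2fa5-463e-adc9-74ade2a795c3"]
topic = "policy-pdp-pap"

# Kafka's usual guidance: heartbeat interval no more than a third of the
# session timeout, which the logged values satisfy comfortably.
assert common_consumer_config["heartbeat_interval_ms"] * 3 <= common_consumer_config["session_timeout_ms"]
```

Having two groups means each message on `policy-pdp-pap` is delivered once per group: once to the fixed `policy-pap` group and once to the per-instance UUID group.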
[2024-08-13T17:02:33.364+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug policy-pap | [2024-08-13T17:02:33.364+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-08-13T17:02:33.365+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug policy-pap | [2024-08-13T17:02:33.365+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug policy-pap | [2024-08-13T17:02:33.431+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.432+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Cluster ID: SIMEIeI3Sp-rNGzMENX8ug policy-pap | [2024-08-13T17:02:33.476+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.478+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2024-08-13T17:02:33.483+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2024-08-13T17:02:33.550+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.587+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.671+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.695+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:33.786+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-08-13T17:02:34.515+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-08-13T17:02:34.524+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] (Re-)joining group policy-pap | [2024-08-13T17:02:34.533+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator 
kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-08-13T17:02:34.537+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-08-13T17:02:34.559+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621 policy-pap | [2024-08-13T17:02:34.559+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Request joining group due to: need to re-join with the given member-id: consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472 policy-pap | [2024-08-13T17:02:34.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-08-13T17:02:34.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-08-13T17:02:34.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-08-13T17:02:34.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] (Re-)joining group policy-pap | [2024-08-13T17:02:37.576+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472', protocol='range'} policy-pap | [2024-08-13T17:02:37.578+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621', protocol='range'} policy-pap | [2024-08-13T17:02:37.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Finished assignment for group at generation 1: {consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-08-13T17:02:37.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-08-13T17:02:37.630+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cf0bd346-c68a-4b8f-bcf8-c3c23ba8c621', protocol='range'} policy-pap | [2024-08-13T17:02:37.632+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-08-13T17:02:37.630+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3-0fc44030-f254-4048-b25e-e981afcae472', protocol='range'} policy-pap | [2024-08-13T17:02:37.635+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-08-13T17:02:37.639+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-08-13T17:02:37.639+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-08-13T17:02:37.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-08-13T17:02:37.667+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-08-13T17:02:37.691+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-08-13T17:02:37.693+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c3121714-2fa5-463e-adc9-74ade2a795c3-3, groupId=c3121714-2fa5-463e-adc9-74ade2a795c3] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-08-13T17:02:41.594+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-08-13T17:02:41.594+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-08-13T17:02:41.597+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms policy-pap | [2024-08-13T17:02:54.373+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-08-13T17:02:54.374+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f56744a6-c79e-45d1-9261-1b79608d273d","timestampMs":1723568574336,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-pap | [2024-08-13T17:02:54.374+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"f56744a6-c79e-45d1-9261-1b79608d273d","timestampMs":1723568574336,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-pap | [2024-08-13T17:02:54.384+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-08-13T17:02:54.455+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting policy-pap | [2024-08-13T17:02:54.455+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting listener policy-pap | [2024-08-13T17:02:54.456+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting timer policy-pap | [2024-08-13T17:02:54.456+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=94134ac7-f438-4299-a357-6d4733bc0237, expireMs=1723568604456] policy-pap | [2024-08-13T17:02:54.458+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting enqueue policy-pap | [2024-08-13T17:02:54.458+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=94134ac7-f438-4299-a357-6d4733bc0237, expireMs=1723568604456] policy-pap | [2024-08-13T17:02:54.459+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate started policy-pap | [2024-08-13T17:02:54.464+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"94134ac7-f438-4299-a357-6d4733bc0237","timestampMs":1723568574438,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | 
[2024-08-13T17:02:54.521+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"94134ac7-f438-4299-a357-6d4733bc0237","timestampMs":1723568574438,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.522+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-08-13T17:02:54.539+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"94134ac7-f438-4299-a357-6d4733bc0237","timestampMs":1723568574438,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.539+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-08-13T17:02:54.557+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"72241bc4-797e-4f34-8e42-8d916ac33b53","timestampMs":1723568574549,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-pap | [2024-08-13T17:02:54.559+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-08-13T17:02:54.563+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"94134ac7-f438-4299-a357-6d4733bc0237","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a6bebc18-1074-4c55-8c9b-c1c8e9d7314a","timestampMs":1723568574551,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.563+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"72241bc4-797e-4f34-8e42-8d916ac33b53","timestampMs":1723568574549,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup"} policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping enqueue policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping timer policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=94134ac7-f438-4299-a357-6d4733bc0237, expireMs=1723568604456] policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping listener policy-pap | [2024-08-13T17:02:54.564+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopped policy-pap | [2024-08-13T17:02:54.569+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate successful policy-pap | [2024-08-13T17:02:54.569+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] 
apex-93de6edc-9eb2-414e-a019-842a996e1c21 start publishing next request policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange starting policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange starting listener policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange starting timer policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=8d28b914-0ef0-4ad6-96b5-835801266569, expireMs=1723568604570] policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=8d28b914-0ef0-4ad6-96b5-835801266569, expireMs=1723568604570] policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange starting enqueue policy-pap | [2024-08-13T17:02:54.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange started policy-pap | [2024-08-13T17:02:54.571+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8d28b914-0ef0-4ad6-96b5-835801266569","timestampMs":1723568574439,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.644+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8d28b914-0ef0-4ad6-96b5-835801266569","timestampMs":1723568574439,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.644+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-08-13T17:02:54.647+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8d28b914-0ef0-4ad6-96b5-835801266569","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7b022d67-c7cd-493b-8723-523293c4e0d4","timestampMs":1723568574602,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange stopping policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange stopping enqueue policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange stopping timer policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=8d28b914-0ef0-4ad6-96b5-835801266569, expireMs=1723568604570] policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange stopping listener policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange stopped policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpStateChange successful policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 start publishing next request policy-pap | [2024-08-13T17:02:54.779+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting policy-pap | [2024-08-13T17:02:54.780+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting listener policy-pap | [2024-08-13T17:02:54.780+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting timer policy-pap | [2024-08-13T17:02:54.780+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=89e0643d-8cc1-42a9-a0be-206cd431b61c, expireMs=1723568604780] policy-pap | [2024-08-13T17:02:54.780+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate starting enqueue policy-pap | [2024-08-13T17:02:54.780+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate started policy-pap | [2024-08-13T17:02:54.794+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"89e0643d-8cc1-42a9-a0be-206cd431b61c","timestampMs":1723568574635,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.806+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"94134ac7-f438-4299-a357-6d4733bc0237","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a6bebc18-1074-4c55-8c9b-c1c8e9d7314a","timestampMs":1723568574551,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.807+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 94134ac7-f438-4299-a357-6d4733bc0237 policy-pap | [2024-08-13T17:02:54.811+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8d28b914-0ef0-4ad6-96b5-835801266569","timestampMs":1723568574439,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.811+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-08-13T17:02:54.811+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8d28b914-0ef0-4ad6-96b5-835801266569","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"7b022d67-c7cd-493b-8723-523293c4e0d4","timestampMs":1723568574602,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.812+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8d28b914-0ef0-4ad6-96b5-835801266569 policy-pap | [2024-08-13T17:02:54.817+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"89e0643d-8cc1-42a9-a0be-206cd431b61c","timestampMs":1723568574635,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.817+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-08-13T17:02:54.819+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f8d5c3cf-5399-4bf8-9cca-c4b375875f74","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"89e0643d-8cc1-42a9-a0be-206cd431b61c","timestampMs":1723568574635,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.820+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-08-13T17:02:54.822+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"89e0643d-8cc1-42a9-a0be-206cd431b61c","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"f4901ce9-22c6-40ec-b9d8-6310524b71cf","timestampMs":1723568574812,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.823+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"89e0643d-8cc1-42a9-a0be-206cd431b61c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f4901ce9-22c6-40ec-b9d8-6310524b71cf","timestampMs":1723568574812,"name":"apex-93de6edc-9eb2-414e-a019-842a996e1c21","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-08-13T17:02:54.823+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 89e0643d-8cc1-42a9-a0be-206cd431b61c policy-pap | [2024-08-13T17:02:54.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping policy-pap | [2024-08-13T17:02:54.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping enqueue policy-pap | [2024-08-13T17:02:54.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping timer policy-pap | [2024-08-13T17:02:54.824+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=89e0643d-8cc1-42a9-a0be-206cd431b61c, expireMs=1723568604780] policy-pap | [2024-08-13T17:02:54.824+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopping listener policy-pap | [2024-08-13T17:02:54.824+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate stopped policy-pap | 
[2024-08-13T17:02:54.827+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 PdpUpdate successful policy-pap | [2024-08-13T17:02:54.827+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-93de6edc-9eb2-414e-a019-842a996e1c21 has no more requests policy-pap | [2024-08-13T17:03:24.457+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=94134ac7-f438-4299-a357-6d4733bc0237, expireMs=1723568604456] policy-pap | [2024-08-13T17:03:24.570+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=8d28b914-0ef0-4ad6-96b5-835801266569, expireMs=1723568604570] policy-pap | [2024-08-13T17:03:29.751+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. policy-pap | [2024-08-13T17:03:29.802+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-08-13T17:03:29.810+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-08-13T17:03:29.814+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-08-13T17:03:30.233+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup policy-pap | [2024-08-13T17:03:30.803+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup policy-pap | [2024-08-13T17:03:30.804+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup policy-pap | [2024-08-13T17:03:31.381+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2024-08-13T17:03:31.609+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-08-13T17:03:31.694+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-08-13T17:03:31.694+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup policy-pap | 
[2024-08-13T17:03:31.694+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
policy-pap | [2024-08-13T17:03:31.710+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-08-13T17:03:31Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-08-13T17:03:31Z, user=policyadmin)]
policy-pap | [2024-08-13T17:03:32.418+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2024-08-13T17:03:32.419+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2024-08-13T17:03:32.419+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-08-13T17:03:32.420+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2024-08-13T17:03:32.420+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2024-08-13T17:03:32.435+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-08-13T17:03:32Z, user=policyadmin)]
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-pap | [2024-08-13T17:03:32.779+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-pap | [2024-08-13T17:03:32.788+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-08-13T17:03:32Z, user=policyadmin)]
policy-pap | [2024-08-13T17:03:33.370+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-08-13T17:03:33.372+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2024-08-13T17:04:32.913+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
===================================
======== Logs from prometheus ========
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:589 level=info msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:633 level=info msg="Starting Prometheus Server" mode=server version="(version=2.53.2, branch=HEAD, revision=6e971a7dc905696d4bc4ffa150bf282fcfac5fa9)"
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:638 level=info build_context="(go=go1.22.6, platform=linux/amd64, user=root@363b0aa99939, date=20240809-14:55:04, tags=netgo,builtinassets,stringlabels)"
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:639 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:640 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus | ts=2024-08-13T17:01:59.210Z caller=main.go:641 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | ts=2024-08-13T17:01:59.213Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2024-08-13T17:01:59.213Z caller=main.go:1148 level=info msg="Starting TSDB ..."
prometheus | ts=2024-08-13T17:01:59.217Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
prometheus | ts=2024-08-13T17:01:59.217Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus | ts=2024-08-13T17:01:59.222Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus | ts=2024-08-13T17:01:59.222Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.87µs
prometheus | ts=2024-08-13T17:01:59.222Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus | ts=2024-08-13T17:01:59.223Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus | ts=2024-08-13T17:01:59.223Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=37.791µs wal_replay_duration=696.678µs wbl_replay_duration=340ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.87µs total_replay_duration=765.469µs
prometheus | ts=2024-08-13T17:01:59.226Z caller=main.go:1169 level=info fs_type=EXT4_SUPER_MAGIC
prometheus | ts=2024-08-13T17:01:59.226Z caller=main.go:1172 level=info msg="TSDB started"
prometheus | ts=2024-08-13T17:01:59.226Z caller=main.go:1354 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | ts=2024-08-13T17:01:59.227Z caller=main.go:1391 level=info msg="updated GOGC" old=100 new=75
prometheus | ts=2024-08-13T17:01:59.227Z caller=main.go:1402 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.462637ms db_storage=1.51µs remote_storage=3.5µs web_handler=770ns query_engine=1.4µs scrape=364.984µs scrape_sd=136.242µs notify=33.02µs notify_sd=9.041µs rules=2.59µs tracing=7.67µs
prometheus | ts=2024-08-13T17:01:59.227Z caller=main.go:1133 level=info msg="Server is ready to receive web requests."
prometheus | ts=2024-08-13T17:01:59.227Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..."
===================================
======== Logs from simulator ========
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
simulator | 2024-08-13 17:01:55,503 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | 2024-08-13 17:01:55,580 INFO org.onap.policy.models.simulators starting
simulator | 2024-08-13 17:01:55,580 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
simulator | 2024-08-13 17:01:55,770 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
simulator | 2024-08-13 17:01:55,771 INFO org.onap.policy.models.simulators starting A&AI simulator
simulator | 2024-08-13 17:01:55,882 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-13 17:01:55,893 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:55,897 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:55,904 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-13 17:01:55,974 INFO Session workerName=node0
simulator | 2024-08-13 17:01:56,559 INFO Using GSON for REST calls
simulator | 2024-08-13 17:01:56,650 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
simulator | 2024-08-13 17:01:56,659 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
simulator | 2024-08-13 17:01:56,666 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1619ms
simulator | 2024-08-13 17:01:56,666 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4230 ms.
simulator | 2024-08-13 17:01:56,676 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2024-08-13 17:01:56,679 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-13 17:01:56,679 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,680 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,680 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-13 17:01:56,692 INFO Session workerName=node0
simulator | 2024-08-13 17:01:56,761 INFO Using GSON for REST calls
simulator | 2024-08-13 17:01:56,771 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
simulator | 2024-08-13 17:01:56,773 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2024-08-13 17:01:56,773 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1727ms
simulator | 2024-08-13 17:01:56,773 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4907 ms.
simulator | 2024-08-13 17:01:56,774 INFO org.onap.policy.models.simulators starting SO simulator
simulator | 2024-08-13 17:01:56,777 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-13 17:01:56,777 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,778 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,778 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-13 17:01:56,785 INFO Session workerName=node0
simulator | 2024-08-13 17:01:56,847 INFO Using GSON for REST calls
simulator | 2024-08-13 17:01:56,860 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
simulator | 2024-08-13 17:01:56,861 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
simulator | 2024-08-13 17:01:56,862 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1815ms
simulator | 2024-08-13 17:01:56,862 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4916 ms.
simulator | 2024-08-13 17:01:56,862 INFO org.onap.policy.models.simulators starting VFC simulator
simulator | 2024-08-13 17:01:56,871 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-08-13 17:01:56,872 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,872 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-08-13 17:01:56,873 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2024-08-13 17:01:56,878 INFO Session workerName=node0
simulator | 2024-08-13 17:01:56,938 INFO Using GSON for REST calls
simulator | 2024-08-13 17:01:56,947 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
simulator | 2024-08-13 17:01:56,948 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
simulator | 2024-08-13 17:01:56,949 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1902ms
simulator | 2024-08-13 17:01:56,949 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms.
simulator | 2024-08-13 17:01:56,950 INFO org.onap.policy.models.simulators started
===================================
======== Logs from zookeeper ========
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2024-08-13 17:01:59,785] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,787] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,787] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,787] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,787] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,789] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-08-13 17:01:59,789] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-08-13 17:01:59,789] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2024-08-13 17:01:59,789] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2024-08-13 17:01:59,790] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2024-08-13 17:01:59,791] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,791] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,791] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,791] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,791] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2024-08-13 17:01:59,791] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2024-08-13 17:01:59,802] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2024-08-13 17:01:59,804] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2024-08-13 17:01:59,804] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2024-08-13 17:01:59,807] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2024-08-13 17:01:59,815] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,815] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,816] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/connect-transforms-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-3.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/trogdor-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-json-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.0-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,817] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,818] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,819] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-08-13 17:01:59,819] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2024-08-13 17:01:59,820] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper |
[2024-08-13 17:01:59,820] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-13 17:01:59,821] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-08-13 17:01:59,821] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,823] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-08-13 17:01:59,825] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-13 17:01:59,825] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-13 17:01:59,826] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-08-13 17:01:59,826] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2024-08-13 17:01:59,826] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 
snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-13 17:01:59,849] INFO Logging initialized @418ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2024-08-13 17:01:59,923] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-13 17:01:59,923] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-13 17:01:59,940] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server) zookeeper | [2024-08-13 17:01:59,963] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2024-08-13 17:01:59,963] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2024-08-13 17:01:59,964] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2024-08-13 17:01:59,981] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2024-08-13 17:01:59,995] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2024-08-13 17:02:00,007] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2024-08-13 17:02:00,007] INFO Started @582ms (org.eclipse.jetty.server.Server) zookeeper | [2024-08-13 17:02:00,007] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2024-08-13 17:02:00,011] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 
(org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-08-13 17:02:00,011] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2024-08-13 17:02:00,012] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-08-13 17:02:00,014] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-08-13 17:02:00,027] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-08-13 17:02:00,027] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-08-13 17:02:00,027] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-13 17:02:00,027] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-13 17:02:00,032] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-08-13 17:02:00,032] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-08-13 17:02:00,035] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2024-08-13 17:02:00,035] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-08-13 17:02:00,036] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-08-13 17:02:00,044] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false 
(org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2024-08-13 17:02:00,045] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2024-08-13 17:02:00,058] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2024-08-13 17:02:00,059] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2024-08-13 17:02:01,300] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) =================================== Tearing down containers... Container grafana Stopping Container policy-apex-pdp Stopping Container policy-csit Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-apex-pdp Stopped Container policy-apex-pdp Removing Container policy-apex-pdp Removed Container simulator Stopping Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container simulator Stopped Container simulator Removing Container simulator Removed Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container mariadb Stopping Container mariadb Stopped Container mariadb Removing Container mariadb Removed Network compose_default Removing 
Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2099 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins2847273069907034360.sh ---> sysstat.sh [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins1055537726740135154.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins6836324832668290380.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H818 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-H818/bin to PATH INFO: Running in OpenStack, capturing instance metadata 
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14781729027315183472.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config11384463395161900376tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins10724903648482704785.sh
---> create-netrc.sh
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16785977355706301357.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H818 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H818/bin to PATH
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16661170254145630642.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins8254592786907323900.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H818 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-H818/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins10189730114693655559.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H818 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H818/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/87
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-30748 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         877       25149           0        6140       30834
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:b5:aa:e7 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.176/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86017sec preferred_lft 86017sec
    inet6 fe80::f816:3eff:feb5:aae7/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ed:f1:d2:51 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:edff:fef1:d251/64 scope link
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-30748)  08/13/24  _x86_64_  (8 CPU)
16:59:26  LINUX RESTART  (8 CPU)
17:00:02        tps     rtps     wtps   bread/s    bwrtn/s
17:01:01     350.56    39.19   311.37   1776.15   29275.88
17:02:01     486.70    32.83   453.87   3036.80  162882.17
17:03:01     264.52     0.22   264.31     27.86   42285.97
17:04:01      24.18     0.57    23.61     19.86   20363.74
17:05:01      42.80     0.38    42.42     18.66   20642.32
Average:     233.36    14.56   218.80    973.17   55168.98
17:00:02  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
17:01:01   30032840  31669820    2906380      8.82      72988   1870976   1445056     4.25    907908  1699308   205956
17:02:01   25736624  31260756    7202596     21.87     133024   5536176   5844248    17.20   1506088  5137372     4100
17:03:01   23414572  29527700    9524648     28.92     169492   6042448   9025820    26.56   3393124  5511396   134756
17:04:01   23610992  29573740    9328228     28.32     169776   5894004   9064396    26.67   3347284  5360776      252
17:05:01   25051836  30903032    7887384     23.95     170512   5787072   4352296    12.81   2038236  5258436      404
Average:   25569373  30587010    7369847     22.37     143158   5026135   5946363    17.50   2238528  4593458    69094
17:00:02      IFACE  rxpck/s  txpck/s   rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
17:01:01       ens3   158.84    98.24  1222.48    32.13     0.00     0.00      0.00     0.00
17:01:01    docker0     0.00     0.00     0.00     0.00     0.00     0.00      0.00     0.00
17:01:01         lo     1.39     1.39     0.16     0.16     0.00     0.00      0.00     0.00
17:02:01  vethecfefa0   0.00     0.22     0.00     0.01     0.00     0.00      0.00     0.00
17:02:01  vetha197742   0.15     0.27     0.01     0.02     0.00     0.00      0.00     0.00
17:02:01  vethec900e6   0.17     0.30     0.01     0.02     0.00     0.00      0.00     0.00
17:02:01  br-e18f9db58276  0.13  0.12    0.00     0.01     0.00     0.00      0.00     0.00
17:03:01  vethecfefa0   0.00     0.17     0.00     0.01     0.00     0.00      0.00     0.00
17:03:01  vetha197742   3.15     3.75     0.61     0.39     0.00     0.00      0.00     0.00
17:03:01  vethec900e6  10.73    10.78     2.09     1.73     0.00     0.00      0.00     0.00
17:03:01  br-e18f9db58276  0.97  0.83    0.08     0.38     0.00     0.00      0.00     0.00
17:04:01  vethecfefa0   0.00     0.03     0.00     0.00     0.00     0.00      0.00     0.00
17:04:01  vetha197742   3.22     4.72     0.66     0.36     0.00     0.00      0.00     0.00
17:04:01  vethec900e6  43.78    40.33    14.31    38.10     0.00     0.00      0.00     0.00
17:04:01  br-e18f9db58276  0.35  0.20    0.02     0.01     0.00     0.00      0.00     0.00
17:05:01  br-e18f9db58276  0.02  0.00    0.00     0.00     0.00     0.00      0.00     0.00
17:05:01  veth0642a5b  46.62    40.89    18.36    39.96     0.00     0.00      0.00     0.00
17:05:01  vethaa84e3e  42.45    32.47     4.06     4.60     0.00     0.00      0.00     0.00
17:05:01  veth51a2b95   5.16     7.31     0.83     0.97     0.00     0.00      0.00     0.00
Average:  br-e18f9db58276  0.29  0.23    0.02     0.08     0.00     0.00      0.00     0.00
Average:  veth0642a5b   9.36     8.21     3.68     8.02     0.00     0.00      0.00     0.00
Average:  vethaa84e3e   8.52     6.52     0.82     0.92     0.00     0.00      0.00     0.00
Average:  veth51a2b95   1.04     1.47     0.17     0.20     0.00     0.00      0.00     0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-30748)  08/13/24  _x86_64_  (8 CPU)
16:59:26  LINUX RESTART  (8 CPU)
17:00:02  CPU  %user  %nice  %system  %iowait  %steal  %idle
17:01:01  all   9.73   0.00     0.96     2.87    0.04  86.40
17:01:01    0  15.56   0.00     2.00     3.60    0.05  78.78
17:01:01    1   3.27   0.00     0.73    14.49    0.05  81.46
17:01:01    2   2.17   0.00     0.66     0.88    0.02  96.27
17:01:01    3   7.03   0.00     0.63     0.27    0.07  92.01
17:01:01    4   1.56   0.00     0.58     0.25    0.03  97.58
17:01:01    5   1.25   0.00     0.32     0.03    0.02  98.37
17:01:01    6  30.35   0.00     1.58     2.51    0.05  65.51
17:01:01    7  16.69   0.00     1.25     0.98    0.03  81.04
17:02:01  all  16.49   0.00     7.27    10.25    0.07  65.92
17:02:01    0  16.31   0.00     6.84     2.36    0.07  74.42
17:02:01    1  16.80   0.00     7.00     5.77    0.05  70.38
17:02:01    2  17.30   0.00     6.39     0.56    0.05  75.71
17:02:01    3  14.73   0.00     7.04     9.15    0.07  69.02
17:02:01    4  16.05   0.00     7.86    20.12    0.08  55.88
17:02:01    5  14.56   0.00     8.21    20.10    0.07  57.06
17:02:01    6  19.70   0.00     7.24     1.36    0.07  71.63
17:02:01    7  16.41   0.00     7.56    22.82    0.08  53.13
17:03:01  all  27.44   0.00     4.37     2.76    0.09  65.34
17:03:01    0  28.16   0.00     4.91     1.64    0.10  65.19
17:03:01    1  26.45   0.00     4.04     0.71    0.08  68.72
17:03:01    2  26.56   0.00     4.22     0.44    0.10  68.68
17:03:01    3  27.48   0.00     4.26     5.71    0.08  62.46
17:03:01    4  29.32   0.00     4.57     6.21    0.08  59.81
17:03:01    5  27.12   0.00     4.15     2.31    0.10  66.32
17:03:01    6  27.87   0.00     4.88     1.03    0.08  66.14
17:03:01    7  26.57   0.00     3.94     3.99    0.08  65.42
17:04:01  all   5.98   0.00     0.77     1.17    0.05  92.04
17:04:01    0   5.08   0.00     0.65     0.02    0.05  94.20
17:04:01    1   4.80   0.00     0.70     0.65    0.03  93.81
17:04:01    2   7.00   0.00     0.75     0.20    0.05  92.00
17:04:01    3   5.32   0.00     0.70     0.12    0.05  93.81
17:04:01    4   7.88   0.00     0.72     8.13    0.05  83.22
17:04:01    5   5.61   0.00     0.90     0.08    0.05  93.35
17:04:01    6   7.42   0.00     1.13     0.13    0.07  91.25
17:04:01    7   4.68   0.00     0.58     0.05    0.05  94.64
17:05:01  all   2.18   0.00     0.56     1.25    0.04  95.96
17:05:01    0   3.37   0.00     0.64     0.00    0.03  95.96
17:05:01    1   1.90   0.00     0.60     0.18    0.03  97.28
17:05:01    2   2.14   0.00     0.69     0.00    0.03  97.14
17:05:01    3   1.85   0.00     0.62     0.68    0.02  96.83
17:05:01    4   2.78   0.00     0.42     8.82    0.07  87.92
17:05:01    5   1.69   0.00     0.47     0.07    0.05  97.72
17:05:01    6   1.82   0.00     0.53     0.23    0.02  97.39
17:05:01    7   1.92   0.00     0.50     0.03    0.05  97.49
Average:  all  12.33   0.00     2.78     3.65    0.06  81.18
Average:    0  13.63   0.00     3.00     1.51    0.06  81.80
Average:    1  10.64   0.00     2.61     4.33    0.05  82.37
Average:    2  11.02   0.00     2.53     0.41    0.05  85.98
Average:    3  11.25   0.00     2.64     3.18    0.06  82.87
Average:    4  11.50   0.00     2.82     8.71    0.06  76.91
Average:    5  10.03   0.00     2.80     4.49    0.06  82.63
Average:    6  17.36   0.00     3.07     1.05    0.06  78.46
Average:    7  13.21   0.00     2.75     5.53    0.06  78.44