17:01:40 Started by timer
17:01:40 Running as SYSTEM
17:01:40 [EnvInject] - Loading node environment variables.
17:01:40 Building remotely on prd-ubuntu1804-docker-8c-8g-80667 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
17:01:40 [ssh-agent] Looking for ssh-agent implementation...
17:01:40 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
17:01:40 $ ssh-agent
17:01:40 SSH_AUTH_SOCK=/tmp/ssh-72yWmwcMugIo/agent.2122
17:01:40 SSH_AGENT_PID=2124
17:01:40 [ssh-agent] Started.
17:01:40 Running ssh-add (command line suppressed)
17:01:40 Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_18083316873903205595.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_18083316873903205595.key)
17:01:40 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
17:01:40 The recommended git tool is: NONE
17:01:42 using credential onap-jenkins-ssh
17:01:42 Wiping out workspace first.
17:01:42 Cloning the remote Git repository
17:01:42 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
17:01:42  > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
17:01:42 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
17:01:42  > git --version # timeout=10
17:01:42  > git --version # 'git version 2.17.1'
17:01:42 using GIT_SSH to set credentials Gerrit user
17:01:42 Verifying host key using manually-configured host key entries
17:01:42  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
17:01:42  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
17:01:42  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
17:01:43 Avoid second fetch
17:01:43  > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
17:01:43 Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
17:01:43  > git config core.sparsecheckout # timeout=10
17:01:43  > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
17:01:43 Commit message: "Fix timeout in pap CSIT for auditing undeploys"
17:01:43  > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
17:01:46 provisioning config files...
17:01:46 copy managed file [npmrc] to file:/home/jenkins/.npmrc
17:01:46 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
17:01:46 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins11397204779566080477.sh
17:01:46 ---> python-tools-install.sh
17:01:46 Setup pyenv:
17:01:46 * system (set by /opt/pyenv/version)
17:01:46 * 3.8.13 (set by /opt/pyenv/version)
17:01:46 * 3.9.13 (set by /opt/pyenv/version)
17:01:46 * 3.10.6 (set by /opt/pyenv/version)
17:01:51 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xITB
17:01:51 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
17:01:53 lf-activate-venv(): INFO: Installing: lftools
17:02:21 lf-activate-venv(): INFO: Adding /tmp/venv-xITB/bin to PATH
17:02:21 Generating Requirements File
17:02:40 Python 3.10.6
17:02:40 pip 24.3.1 from /tmp/venv-xITB/lib/python3.10/site-packages/pip (python 3.10)
17:02:40 appdirs==1.4.4
17:02:40 argcomplete==3.5.1
17:02:40 aspy.yaml==1.3.0
17:02:40 attrs==24.2.0
17:02:40 autopage==0.5.2
17:02:40 beautifulsoup4==4.12.3
17:02:40 boto3==1.35.52
17:02:40 botocore==1.35.52
17:02:40 bs4==0.0.2
17:02:40 cachetools==5.5.0
17:02:40 certifi==2024.8.30
17:02:40 cffi==1.17.1
17:02:40 cfgv==3.4.0
17:02:40 chardet==5.2.0
17:02:40 charset-normalizer==3.4.0
17:02:40 click==8.1.7
17:02:40 cliff==4.7.0
17:02:40 cmd2==2.5.0
17:02:40 cryptography==3.3.2
17:02:40 debtcollector==3.0.0
17:02:40 decorator==5.1.1
17:02:40 defusedxml==0.7.1
17:02:40 Deprecated==1.2.14
17:02:40 distlib==0.3.9
17:02:40 dnspython==2.7.0
17:02:40 docker==4.2.2
17:02:40 dogpile.cache==1.3.3
17:02:40 durationpy==0.9
17:02:40 email_validator==2.2.0
17:02:40 filelock==3.16.1
17:02:40 future==1.0.0
17:02:40 gitdb==4.0.11
17:02:40 GitPython==3.1.43
17:02:40 google-auth==2.35.0
17:02:40 httplib2==0.22.0
17:02:40 identify==2.6.1
17:02:40 idna==3.10
17:02:40 importlib-resources==1.5.0
17:02:40 iso8601==2.1.0
17:02:40 Jinja2==3.1.4
17:02:40 jmespath==1.0.1
17:02:40 jsonpatch==1.33
17:02:40 jsonpointer==3.0.0
17:02:40 jsonschema==4.23.0
17:02:40 jsonschema-specifications==2024.10.1
17:02:40 keystoneauth1==5.8.0
17:02:40 kubernetes==31.0.0
17:02:40 lftools==0.37.10
17:02:40 lxml==5.3.0
17:02:40 MarkupSafe==3.0.2
17:02:40 msgpack==1.1.0
17:02:40 multi_key_dict==2.0.3
17:02:40 munch==4.0.0
17:02:40 netaddr==1.3.0
17:02:40 netifaces==0.11.0
17:02:40 niet==1.4.2
17:02:40 nodeenv==1.9.1
17:02:40 oauth2client==4.1.3
17:02:40 oauthlib==3.2.2
17:02:40 openstacksdk==4.1.0
17:02:40 os-client-config==2.1.0
17:02:40 os-service-types==1.7.0
17:02:40 osc-lib==3.1.0
17:02:40 oslo.config==9.6.0
17:02:40 oslo.context==5.6.0
17:02:40 oslo.i18n==6.4.0
17:02:40 oslo.log==6.1.2
17:02:40 oslo.serialization==5.5.0
17:02:40 oslo.utils==7.3.0
17:02:40 packaging==24.1
17:02:40 pbr==6.1.0
17:02:40 platformdirs==4.3.6
17:02:40 prettytable==3.12.0
17:02:40 pyasn1==0.6.1
17:02:40 pyasn1_modules==0.4.1
17:02:40 pycparser==2.22
17:02:40 pygerrit2==2.0.15
17:02:40 PyGithub==2.4.0
17:02:40 PyJWT==2.9.0
17:02:40 PyNaCl==1.5.0
17:02:40 pyparsing==2.4.7
17:02:40 pyperclip==1.9.0
17:02:40 pyrsistent==0.20.0
17:02:40 python-cinderclient==9.6.0
17:02:40 python-dateutil==2.9.0.post0
17:02:40 python-heatclient==4.0.0
17:02:40 python-jenkins==1.8.2
17:02:40 python-keystoneclient==5.5.0
17:02:40 python-magnumclient==4.7.0
17:02:40 python-openstackclient==7.2.1
17:02:40 python-swiftclient==4.6.0
17:02:40 PyYAML==6.0.2
17:02:40 referencing==0.35.1
17:02:40 requests==2.32.3
17:02:40 requests-oauthlib==2.0.0
17:02:40 requestsexceptions==1.4.0
17:02:40 rfc3986==2.0.0
17:02:40 rpds-py==0.20.1
17:02:40 rsa==4.9
17:02:40 ruamel.yaml==0.18.6
17:02:40 ruamel.yaml.clib==0.2.12
17:02:40 s3transfer==0.10.3
17:02:40 simplejson==3.19.3
17:02:40 six==1.16.0
17:02:40 smmap==5.0.1
17:02:40 soupsieve==2.6
17:02:40 stevedore==5.3.0
17:02:40 tabulate==0.9.0
17:02:40 toml==0.10.2
17:02:40 tomlkit==0.13.2
17:02:40 tqdm==4.66.6
17:02:40 typing_extensions==4.12.2
17:02:40 tzdata==2024.2
17:02:40 urllib3==1.26.20
17:02:40 virtualenv==20.27.1
17:02:40 wcwidth==0.2.13
17:02:40 websocket-client==1.8.0
17:02:40 wrapt==1.16.0
17:02:40 xdg==6.0.0
17:02:40 xmltodict==0.14.2
17:02:40 yq==3.4.3
17:02:40 [EnvInject] - Injecting environment variables from a build step.
17:02:40 [EnvInject] - Injecting as environment variables the properties content
17:02:40 SET_JDK_VERSION=openjdk17
17:02:40 GIT_URL="git://cloud.onap.org/mirror"
17:02:40
17:02:40 [EnvInject] - Variables injected successfully.
17:02:40 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins5855775319549989398.sh
17:02:40 ---> update-java-alternatives.sh
17:02:40 ---> Updating Java version
17:02:40 ---> Ubuntu/Debian system detected
17:02:41 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
17:02:41 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
17:02:41 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
17:02:41 openjdk version "17.0.4" 2022-07-19
17:02:41 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
17:02:41 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
17:02:41 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
17:02:41 [EnvInject] - Injecting environment variables from a build step.
17:02:41 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
17:02:41 [EnvInject] - Variables injected successfully.
17:02:41 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins4061295910835308381.sh
17:02:41 + /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
17:02:41 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
17:02:41 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
17:02:41 Configure a credential helper to remove this warning. See
17:02:41 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
17:02:41
17:02:41 Login Succeeded
17:02:41 docker: 'compose' is not a docker command.
17:02:41 See 'docker --help'
17:02:41 Docker Compose Plugin not installed. Installing now...
17:02:42 [curl progress meter omitted: 60.0MB downloaded]
17:02:42 Setting project configuration for: pap
17:02:42 Configuring docker compose...
17:02:44 Starting apex-pdp application with Grafana
17:02:45 zookeeper Pulling
17:02:45 prometheus Pulling
17:02:45 mariadb Pulling
17:02:45 grafana Pulling
17:02:45 apex-pdp Pulling
17:02:45 policy-db-migrator Pulling
17:02:45 api Pulling
17:02:45 kafka Pulling
17:02:45 simulator Pulling
17:02:45 pap Pulling
[per-layer image download/extraction progress omitted]
17:02:48 cd00854cfb1a Download complete 17:02:48 10ac4908093d Extracting [> ] 327.7kB/30.43MB 17:02:48 ad1782e4d1ef Extracting [=======> ] 27.85MB/180.4MB 17:02:48 353af139d39e Downloading [==================================> ] 171.9MB/246.5MB 17:02:48 1fe734c5fee3 Extracting [===============> ] 10.45MB/32.94MB 17:02:48 806be17e856d Downloading [====> ] 7.568MB/89.72MB 17:02:48 f270a5fd7930 Extracting [=============> ] 44.01MB/159.1MB 17:02:48 56f27190e824 Downloading [> ] 380.1kB/37.11MB 17:02:48 56f27190e824 Downloading [> ] 380.1kB/37.11MB 17:02:48 10ac4908093d Extracting [=====> ] 3.277MB/30.43MB 17:02:48 353af139d39e Downloading [=====================================> ] 186.5MB/246.5MB 17:02:48 ad1782e4d1ef Extracting [=========> ] 34.54MB/180.4MB 17:02:48 1fe734c5fee3 Extracting [====================> ] 13.34MB/32.94MB 17:02:48 806be17e856d Downloading [========> ] 14.6MB/89.72MB 17:02:48 f270a5fd7930 Extracting [================> ] 52.36MB/159.1MB 17:02:48 4be774fd73e2 Pull complete 17:02:48 b9357b55a7a5 Pull complete 17:02:48 71f834c33815 Extracting [==================================================>] 1.147kB/1.147kB 17:02:48 71f834c33815 Extracting [==================================================>] 1.147kB/1.147kB 17:02:48 56f27190e824 Downloading [===========> ] 8.301MB/37.11MB 17:02:48 56f27190e824 Downloading [===========> ] 8.301MB/37.11MB 17:02:48 4c3047628e17 Extracting [==================================================>] 1.324kB/1.324kB 17:02:48 4c3047628e17 Extracting [==================================================>] 1.324kB/1.324kB 17:02:48 10ac4908093d Extracting [========> ] 4.915MB/30.43MB 17:02:48 353af139d39e Downloading [========================================> ] 199MB/246.5MB 17:02:48 ad1782e4d1ef Extracting [===========> ] 41.78MB/180.4MB 17:02:48 1fe734c5fee3 Extracting [======================> ] 14.78MB/32.94MB 17:02:48 806be17e856d Downloading [=============> ] 24.87MB/89.72MB 17:02:48 f270a5fd7930 Extracting 
[==================> ] 58.49MB/159.1MB 17:02:48 10ac4908093d Extracting [=============> ] 8.192MB/30.43MB 17:02:48 353af139d39e Downloading [==========================================> ] 209.8MB/246.5MB 17:02:48 56f27190e824 Downloading [======================> ] 16.97MB/37.11MB 17:02:48 56f27190e824 Downloading [======================> ] 16.97MB/37.11MB 17:02:48 ad1782e4d1ef Extracting [=============> ] 47.35MB/180.4MB 17:02:48 1fe734c5fee3 Extracting [==========================> ] 17.3MB/32.94MB 17:02:48 806be17e856d Downloading [======================> ] 40.01MB/89.72MB 17:02:48 f270a5fd7930 Extracting [====================> ] 66.29MB/159.1MB 17:02:48 71f834c33815 Pull complete 17:02:48 4c3047628e17 Pull complete 17:02:48 10ac4908093d Extracting [=================> ] 10.49MB/30.43MB 17:02:48 353af139d39e Downloading [=============================================> ] 222.2MB/246.5MB 17:02:48 56f27190e824 Downloading [=========================================> ] 30.6MB/37.11MB 17:02:48 56f27190e824 Downloading [=========================================> ] 30.6MB/37.11MB 17:02:48 ad1782e4d1ef Extracting [===============> ] 56.82MB/180.4MB 17:02:48 1fe734c5fee3 Extracting [==============================> ] 19.82MB/32.94MB 17:02:48 56f27190e824 Verifying Checksum 17:02:48 56f27190e824 Verifying Checksum 17:02:48 56f27190e824 Download complete 17:02:48 56f27190e824 Download complete 17:02:48 806be17e856d Downloading [=============================> ] 52.98MB/89.72MB 17:02:48 f270a5fd7930 Extracting [=======================> ] 75.2MB/159.1MB 17:02:48 a40760cd2625 Extracting [> ] 557.1kB/84.46MB 17:02:48 353af139d39e Downloading [===============================================> ] 236.3MB/246.5MB 17:02:48 10ac4908093d Extracting [====================> ] 12.45MB/30.43MB 17:02:48 ad1782e4d1ef Extracting [=================> ] 62.39MB/180.4MB 17:02:48 1fe734c5fee3 Extracting [================================> ] 21.63MB/32.94MB 17:02:48 6cf350721225 Extracting [> ] 
557.1kB/98.32MB 17:02:48 806be17e856d Downloading [=====================================> ] 66.5MB/89.72MB 17:02:48 8e70b9b9b078 Downloading [> ] 527.6kB/272.7MB 17:02:48 8e70b9b9b078 Downloading [> ] 527.6kB/272.7MB 17:02:48 353af139d39e Verifying Checksum 17:02:48 353af139d39e Download complete 17:02:48 f270a5fd7930 Extracting [=========================> ] 81.33MB/159.1MB 17:02:48 56f27190e824 Extracting [> ] 393.2kB/37.11MB 17:02:48 56f27190e824 Extracting [> ] 393.2kB/37.11MB 17:02:48 a40760cd2625 Extracting [===> ] 6.685MB/84.46MB 17:02:48 10ac4908093d Extracting [========================> ] 14.75MB/30.43MB 17:02:48 ad1782e4d1ef Extracting [==================> ] 67.96MB/180.4MB 17:02:48 6cf350721225 Extracting [===> ] 6.685MB/98.32MB 17:02:48 1fe734c5fee3 Extracting [=================================> ] 22.35MB/32.94MB 17:02:48 806be17e856d Downloading [==============================================> ] 82.72MB/89.72MB 17:02:48 8e70b9b9b078 Downloading [=> ] 9.632MB/272.7MB 17:02:48 8e70b9b9b078 Downloading [=> ] 9.632MB/272.7MB 17:02:48 f270a5fd7930 Extracting [===========================> ] 86.34MB/159.1MB 17:02:48 732c9ebb730c Downloading [================================> ] 719B/1.111kB 17:02:48 732c9ebb730c Downloading [================================> ] 719B/1.111kB 17:02:48 732c9ebb730c Downloading [==================================================>] 1.111kB/1.111kB 17:02:48 732c9ebb730c Downloading [==================================================>] 1.111kB/1.111kB 17:02:48 732c9ebb730c Verifying Checksum 17:02:48 732c9ebb730c Download complete 17:02:48 732c9ebb730c Verifying Checksum 17:02:48 732c9ebb730c Download complete 17:02:48 a40760cd2625 Extracting [=======> ] 12.26MB/84.46MB 17:02:48 10ac4908093d Extracting [=============================> ] 17.69MB/30.43MB 17:02:48 56f27190e824 Extracting [====> ] 3.539MB/37.11MB 17:02:48 56f27190e824 Extracting [====> ] 3.539MB/37.11MB 17:02:48 ad1782e4d1ef Extracting [====================> ] 
74.65MB/180.4MB 17:02:48 806be17e856d Verifying Checksum 17:02:48 806be17e856d Download complete 17:02:48 6cf350721225 Extracting [=====> ] 11.7MB/98.32MB 17:02:48 f270a5fd7930 Extracting [============================> ] 89.13MB/159.1MB 17:02:48 ed746366f1b8 Downloading [> ] 85.77kB/8.378MB 17:02:48 ed746366f1b8 Downloading [> ] 85.77kB/8.378MB 17:02:48 8e70b9b9b078 Downloading [===> ] 20.35MB/272.7MB 17:02:48 8e70b9b9b078 Downloading [===> ] 20.35MB/272.7MB 17:02:48 a40760cd2625 Extracting [==========> ] 17.27MB/84.46MB 17:02:48 1fe734c5fee3 Extracting [====================================> ] 23.79MB/32.94MB 17:02:48 10ac4908093d Extracting [=================================> ] 20.64MB/30.43MB 17:02:48 10894799ccd9 Downloading [=> ] 718B/21.28kB 17:02:48 10894799ccd9 Downloading [=> ] 718B/21.28kB 17:02:48 10894799ccd9 Verifying Checksum 17:02:48 10894799ccd9 Download complete 17:02:48 10894799ccd9 Verifying Checksum 17:02:48 10894799ccd9 Download complete 17:02:48 56f27190e824 Extracting [========> ] 6.291MB/37.11MB 17:02:48 56f27190e824 Extracting [========> ] 6.291MB/37.11MB 17:02:48 ad1782e4d1ef Extracting [======================> ] 81.33MB/180.4MB 17:02:48 6cf350721225 Extracting [========> ] 17.27MB/98.32MB 17:02:48 f270a5fd7930 Extracting [=============================> ] 94.14MB/159.1MB 17:02:48 ed746366f1b8 Downloading [==============================================> ] 7.817MB/8.378MB 17:02:48 ed746366f1b8 Downloading [==============================================> ] 7.817MB/8.378MB 17:02:48 ed746366f1b8 Verifying Checksum 17:02:48 ed746366f1b8 Download complete 17:02:48 ed746366f1b8 Verifying Checksum 17:02:48 ed746366f1b8 Download complete 17:02:48 8e70b9b9b078 Downloading [=====> ] 27.85MB/272.7MB 17:02:48 8e70b9b9b078 Downloading [=====> ] 27.85MB/272.7MB 17:02:48 10ac4908093d Extracting [=====================================> ] 22.94MB/30.43MB 17:02:49 1fe734c5fee3 Extracting [=====================================> ] 24.87MB/32.94MB 17:02:49 
ad1782e4d1ef Extracting [=======================> ] 85.79MB/180.4MB 17:02:49 a40760cd2625 Extracting [============> ] 21.73MB/84.46MB 17:02:49 6cf350721225 Extracting [============> ] 24.51MB/98.32MB 17:02:49 8d377259558c Downloading [> ] 437.5kB/43.24MB 17:02:49 8d377259558c Downloading [> ] 437.5kB/43.24MB 17:02:49 56f27190e824 Extracting [=============> ] 9.83MB/37.11MB 17:02:49 56f27190e824 Extracting [=============> ] 9.83MB/37.11MB 17:02:49 f270a5fd7930 Extracting [==============================> ] 98.6MB/159.1MB 17:02:49 e7688095d1e6 Downloading [================================> ] 719B/1.106kB 17:02:49 e7688095d1e6 Downloading [================================> ] 719B/1.106kB 17:02:49 e7688095d1e6 Downloading [==================================================>] 1.106kB/1.106kB 17:02:49 e7688095d1e6 Downloading [==================================================>] 1.106kB/1.106kB 17:02:49 e7688095d1e6 Verifying Checksum 17:02:49 e7688095d1e6 Download complete 17:02:49 e7688095d1e6 Verifying Checksum 17:02:49 e7688095d1e6 Download complete 17:02:49 8e70b9b9b078 Downloading [======> ] 35.85MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [======> ] 35.85MB/272.7MB 17:02:49 10ac4908093d Extracting [========================================> ] 24.9MB/30.43MB 17:02:49 1fe734c5fee3 Extracting [========================================> ] 26.67MB/32.94MB 17:02:49 a40760cd2625 Extracting [===============> ] 26.74MB/84.46MB 17:02:49 ad1782e4d1ef Extracting [========================> ] 89.69MB/180.4MB 17:02:49 6cf350721225 Extracting [===============> ] 30.08MB/98.32MB 17:02:49 8d377259558c Downloading [==========> ] 8.825MB/43.24MB 17:02:49 8d377259558c Downloading [==========> ] 8.825MB/43.24MB 17:02:49 56f27190e824 Extracting [================> ] 12.58MB/37.11MB 17:02:49 56f27190e824 Extracting [================> ] 12.58MB/37.11MB 17:02:49 f270a5fd7930 Extracting [================================> ] 103.6MB/159.1MB 17:02:49 8e70b9b9b078 Downloading [========> ] 
45.52MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [========> ] 45.52MB/272.7MB 17:02:49 8eab815b3593 Downloading [==========================================> ] 721B/853B 17:02:49 8eab815b3593 Downloading [==========================================> ] 721B/853B 17:02:49 8eab815b3593 Downloading [==================================================>] 853B/853B 17:02:49 8eab815b3593 Downloading [==================================================>] 853B/853B 17:02:49 8eab815b3593 Verifying Checksum 17:02:49 8eab815b3593 Download complete 17:02:49 8eab815b3593 Verifying Checksum 17:02:49 8eab815b3593 Download complete 17:02:49 ad1782e4d1ef Extracting [=========================> ] 93.03MB/180.4MB 17:02:49 a40760cd2625 Extracting [==================> ] 31.75MB/84.46MB 17:02:49 6cf350721225 Extracting [==================> ] 35.65MB/98.32MB 17:02:49 8d377259558c Downloading [====================> ] 17.66MB/43.24MB 17:02:49 8d377259558c Downloading [====================> ] 17.66MB/43.24MB 17:02:49 1fe734c5fee3 Extracting [=========================================> ] 27.39MB/32.94MB 17:02:49 f270a5fd7930 Extracting [==================================> ] 110.3MB/159.1MB 17:02:49 56f27190e824 Extracting [===================> ] 14.55MB/37.11MB 17:02:49 56f27190e824 Extracting [===================> ] 14.55MB/37.11MB 17:02:49 00ded6dd259e Downloading [==================================================>] 98B/98B 17:02:49 00ded6dd259e Downloading [==================================================>] 98B/98B 17:02:49 00ded6dd259e Verifying Checksum 17:02:49 00ded6dd259e Download complete 17:02:49 00ded6dd259e Verifying Checksum 17:02:49 00ded6dd259e Download complete 17:02:49 8e70b9b9b078 Downloading [==========> ] 54.62MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [==========> ] 54.62MB/272.7MB 17:02:49 10ac4908093d Extracting [===========================================> ] 26.54MB/30.43MB 17:02:49 a40760cd2625 Extracting [=====================> ] 36.77MB/84.46MB 17:02:49 ad1782e4d1ef 
Extracting [==========================> ] 95.26MB/180.4MB 17:02:49 8d377259558c Downloading [=================================> ] 28.69MB/43.24MB 17:02:49 8d377259558c Downloading [=================================> ] 28.69MB/43.24MB 17:02:49 6cf350721225 Extracting [====================> ] 40.67MB/98.32MB 17:02:49 296f622c8150 Downloading [==================================================>] 172B/172B 17:02:49 296f622c8150 Downloading [==================================================>] 172B/172B 17:02:49 296f622c8150 Verifying Checksum 17:02:49 296f622c8150 Verifying Checksum 17:02:49 296f622c8150 Download complete 17:02:49 296f622c8150 Download complete 17:02:49 f270a5fd7930 Extracting [====================================> ] 115.3MB/159.1MB 17:02:49 8e70b9b9b078 Downloading [============> ] 65.91MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [============> ] 65.91MB/272.7MB 17:02:49 1fe734c5fee3 Extracting [=============================================> ] 30.28MB/32.94MB 17:02:49 56f27190e824 Extracting [======================> ] 16.91MB/37.11MB 17:02:49 56f27190e824 Extracting [======================> ] 16.91MB/37.11MB 17:02:49 10ac4908093d Extracting [==============================================> ] 28.51MB/30.43MB 17:02:49 a40760cd2625 Extracting [=========================> ] 42.89MB/84.46MB 17:02:49 ad1782e4d1ef Extracting [===========================> ] 98.04MB/180.4MB 17:02:49 8d377259558c Downloading [===============================================> ] 41.48MB/43.24MB 17:02:49 8d377259558c Downloading [===============================================> ] 41.48MB/43.24MB 17:02:49 6cf350721225 Extracting [=======================> ] 46.24MB/98.32MB 17:02:49 f270a5fd7930 Extracting [=====================================> ] 120.9MB/159.1MB 17:02:49 8d377259558c Verifying Checksum 17:02:49 8d377259558c Verifying Checksum 17:02:49 8d377259558c Download complete 17:02:49 8d377259558c Download complete 17:02:49 4ee3050cff6b Downloading [> ] 2.738kB/230.6kB 17:02:49 
4ee3050cff6b Downloading [> ] 2.738kB/230.6kB 17:02:49 8e70b9b9b078 Downloading [=============> ] 75.56MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [=============> ] 75.56MB/272.7MB 17:02:49 1fe734c5fee3 Extracting [===============================================> ] 31MB/32.94MB 17:02:49 56f27190e824 Extracting [=========================> ] 19.27MB/37.11MB 17:02:49 56f27190e824 Extracting [=========================> ] 19.27MB/37.11MB 17:02:49 4ee3050cff6b Verifying Checksum 17:02:49 4ee3050cff6b Verifying Checksum 17:02:49 4ee3050cff6b Download complete 17:02:49 4ee3050cff6b Download complete 17:02:49 a40760cd2625 Extracting [============================> ] 48.46MB/84.46MB 17:02:49 1fe734c5fee3 Extracting [==================================================>] 32.94MB/32.94MB 17:02:49 6cf350721225 Extracting [=========================> ] 50.69MB/98.32MB 17:02:49 ad1782e4d1ef Extracting [===========================> ] 100.3MB/180.4MB 17:02:49 10ac4908093d Extracting [=================================================> ] 30.15MB/30.43MB 17:02:49 98acab318002 Downloading [> ] 535.8kB/121.9MB 17:02:49 f270a5fd7930 Extracting [=======================================> ] 125.3MB/159.1MB 17:02:49 8e70b9b9b078 Downloading [===============> ] 85.72MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [===============> ] 85.72MB/272.7MB 17:02:49 878348106a95 Downloading [==========> ] 719B/3.447kB 17:02:49 878348106a95 Downloading [==================================================>] 3.447kB/3.447kB 17:02:49 878348106a95 Verifying Checksum 17:02:49 878348106a95 Download complete 17:02:49 56f27190e824 Extracting [==============================> ] 22.81MB/37.11MB 17:02:49 56f27190e824 Extracting [==============================> ] 22.81MB/37.11MB 17:02:49 9fa9226be034 Downloading [> ] 15.3kB/783kB 17:02:49 a40760cd2625 Extracting [================================> ] 54.59MB/84.46MB 17:02:49 8e70b9b9b078 Downloading [=================> ] 94.33MB/272.7MB 17:02:49 8e70b9b9b078 Downloading 
[=================> ] 94.33MB/272.7MB 17:02:49 f270a5fd7930 Extracting [========================================> ] 128.1MB/159.1MB 17:02:49 9fa9226be034 Downloading [==================================================>] 783kB/783kB 17:02:49 98acab318002 Downloading [==> ] 5.892MB/121.9MB 17:02:49 9fa9226be034 Verifying Checksum 17:02:49 9fa9226be034 Download complete 17:02:49 9fa9226be034 Extracting [==> ] 32.77kB/783kB 17:02:49 6cf350721225 Extracting [============================> ] 55.15MB/98.32MB 17:02:49 ad1782e4d1ef Extracting [============================> ] 104.2MB/180.4MB 17:02:49 56f27190e824 Extracting [================================> ] 23.99MB/37.11MB 17:02:49 56f27190e824 Extracting [================================> ] 23.99MB/37.11MB 17:02:49 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 17:02:49 a40760cd2625 Extracting [================================> ] 55.71MB/84.46MB 17:02:49 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 17:02:49 1617e25568b2 Verifying Checksum 17:02:49 1617e25568b2 Download complete 17:02:49 10ac4908093d Extracting [==================================================>] 30.43MB/30.43MB 17:02:49 1fe734c5fee3 Pull complete 17:02:49 c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB 17:02:49 8e70b9b9b078 Downloading [===================> ] 104MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [===================> ] 104MB/272.7MB 17:02:49 25b95a09a872 Downloading [> ] 539.6kB/58.98MB 17:02:49 c8e6f0452a8e Extracting [==================================================>] 1.076kB/1.076kB 17:02:49 f270a5fd7930 Extracting [=========================================> ] 132MB/159.1MB 17:02:49 98acab318002 Downloading [======> ] 16.57MB/121.9MB 17:02:49 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 17:02:49 56f27190e824 Extracting [====================================> ] 26.74MB/37.11MB 17:02:49 56f27190e824 Extracting 
[====================================> ] 26.74MB/37.11MB 17:02:49 6cf350721225 Extracting [===============================> ] 62.39MB/98.32MB 17:02:49 9fa9226be034 Extracting [==================================================>] 783kB/783kB 17:02:49 a40760cd2625 Extracting [===================================> ] 59.6MB/84.46MB 17:02:49 ad1782e4d1ef Extracting [=============================> ] 106.4MB/180.4MB 17:02:49 10ac4908093d Pull complete 17:02:49 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 17:02:49 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 17:02:49 f270a5fd7930 Extracting [==========================================> ] 135.4MB/159.1MB 17:02:49 8e70b9b9b078 Downloading [=====================> ] 115.2MB/272.7MB 17:02:49 8e70b9b9b078 Downloading [=====================> ] 115.2MB/272.7MB 17:02:49 25b95a09a872 Downloading [=====> ] 6.487MB/58.98MB 17:02:49 98acab318002 Downloading [==========> ] 26.22MB/121.9MB 17:02:49 9fa9226be034 Pull complete 17:02:49 6cf350721225 Extracting [===================================> ] 69.07MB/98.32MB 17:02:49 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 17:02:49 56f27190e824 Extracting [======================================> ] 28.7MB/37.11MB 17:02:49 56f27190e824 Extracting [======================================> ] 28.7MB/37.11MB 17:02:49 a40760cd2625 Extracting [=====================================> ] 63.5MB/84.46MB 17:02:49 ad1782e4d1ef Extracting [==============================> ] 108.6MB/180.4MB 17:02:50 f270a5fd7930 Extracting [============================================> ] 142.6MB/159.1MB 17:02:50 25b95a09a872 Downloading [============> ] 15.14MB/58.98MB 17:02:50 c8e6f0452a8e Pull complete 17:02:50 98acab318002 Downloading [==============> ] 35.33MB/121.9MB 17:02:50 0143f8517101 Extracting [==================================================>] 5.324kB/5.324kB 17:02:50 0143f8517101 Extracting 
[==================================================>] 5.324kB/5.324kB 17:02:50 8e70b9b9b078 Downloading [=======================> ] 126.5MB/272.7MB 17:02:50 8e70b9b9b078 Downloading [=======================> ] 126.5MB/272.7MB 17:02:50 6cf350721225 Extracting [======================================> ] 76.32MB/98.32MB 17:02:50 56f27190e824 Extracting [===========================================> ] 32.24MB/37.11MB 17:02:50 56f27190e824 Extracting [===========================================> ] 32.24MB/37.11MB 17:02:50 a40760cd2625 Extracting [=========================================> ] 69.63MB/84.46MB 17:02:50 ad1782e4d1ef Extracting [==============================> ] 110.3MB/180.4MB 17:02:50 f270a5fd7930 Extracting [==============================================> ] 147.6MB/159.1MB 17:02:50 25b95a09a872 Downloading [=====================> ] 25.41MB/58.98MB 17:02:50 44779101e748 Pull complete 17:02:50 98acab318002 Downloading [===================> ] 47.14MB/121.9MB 17:02:50 8e70b9b9b078 Downloading [========================> ] 132.4MB/272.7MB 17:02:50 8e70b9b9b078 Downloading [========================> ] 132.4MB/272.7MB 17:02:50 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 17:02:50 a721db3e3f3d Extracting [> ] 65.54kB/5.526MB 17:02:50 6cf350721225 Extracting [===========================================> ] 84.67MB/98.32MB 17:02:50 56f27190e824 Extracting [=============================================> ] 33.82MB/37.11MB 17:02:50 56f27190e824 Extracting [=============================================> ] 33.82MB/37.11MB 17:02:50 a40760cd2625 Extracting [===========================================> ] 74.09MB/84.46MB 17:02:50 25b95a09a872 Downloading [==============================> ] 35.68MB/58.98MB 17:02:50 f270a5fd7930 Extracting [===============================================> ] 151MB/159.1MB 17:02:50 98acab318002 Downloading [========================> ] 60MB/121.9MB 17:02:50 ad1782e4d1ef Extracting [===============================> ] 
112.5MB/180.4MB 17:02:50 0143f8517101 Pull complete 17:02:50 8e70b9b9b078 Downloading [=========================> ] 140.4MB/272.7MB 17:02:50 8e70b9b9b078 Downloading [=========================> ] 140.4MB/272.7MB 17:02:50 ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB 17:02:50 ee69cc1a77e2 Extracting [==================================================>] 5.312kB/5.312kB 17:02:50 6cf350721225 Extracting [=============================================> ] 90.24MB/98.32MB 17:02:50 1617e25568b2 Extracting [===============================================> ] 458.8kB/480.9kB 17:02:50 a40760cd2625 Extracting [===============================================> ] 80.22MB/84.46MB 17:02:50 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 17:02:50 56f27190e824 Extracting [==============================================> ] 34.6MB/37.11MB 17:02:50 56f27190e824 Extracting [==============================================> ] 34.6MB/37.11MB 17:02:50 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 17:02:50 25b95a09a872 Downloading [=========================================> ] 48.66MB/58.98MB 17:02:50 f270a5fd7930 Extracting [=================================================> ] 156MB/159.1MB 17:02:50 8e70b9b9b078 Downloading [===========================> ] 149.5MB/272.7MB 17:02:50 8e70b9b9b078 Downloading [===========================> ] 149.5MB/272.7MB 17:02:50 98acab318002 Downloading [=============================> ] 71.81MB/121.9MB 17:02:50 a40760cd2625 Extracting [==================================================>] 84.46MB/84.46MB 17:02:50 ad1782e4d1ef Extracting [===============================> ] 114.8MB/180.4MB 17:02:50 6cf350721225 Extracting [=================================================> ] 96.93MB/98.32MB 17:02:50 a721db3e3f3d Extracting [==> ] 262.1kB/5.526MB 17:02:50 6cf350721225 Extracting 
[==================================================>] 98.32MB/98.32MB 17:02:50 25b95a09a872 Downloading [=================================================> ] 58.93MB/58.98MB 17:02:50 25b95a09a872 Verifying Checksum 17:02:50 25b95a09a872 Download complete 17:02:50 f270a5fd7930 Extracting [=================================================> ] 158.8MB/159.1MB 17:02:50 8e70b9b9b078 Downloading [============================> ] 155.5MB/272.7MB 17:02:50 8e70b9b9b078 Downloading [============================> ] 155.5MB/272.7MB 17:02:50 98acab318002 Downloading [================================> ] 79.31MB/121.9MB 17:02:50 56f27190e824 Extracting [================================================> ] 36.18MB/37.11MB 17:02:50 56f27190e824 Extracting [================================================> ] 36.18MB/37.11MB 17:02:50 f270a5fd7930 Extracting [==================================================>] 159.1MB/159.1MB 17:02:50 a40760cd2625 Pull complete 17:02:50 a721db3e3f3d Extracting [===================> ] 2.163MB/5.526MB 17:02:50 ad1782e4d1ef Extracting [================================> ] 117MB/180.4MB 17:02:50 9010eb24e726 Downloading [> ] 539.6kB/53.89MB 17:02:50 114f99593bd8 Extracting [==================================================>] 1.119kB/1.119kB 17:02:50 6cf350721225 Pull complete 17:02:50 56f27190e824 Extracting [==================================================>] 37.11MB/37.11MB 17:02:50 56f27190e824 Extracting [==================================================>] 37.11MB/37.11MB 17:02:50 1617e25568b2 Pull complete 17:02:50 114f99593bd8 Extracting [==================================================>] 1.119kB/1.119kB 17:02:50 de723b4c7ed9 Extracting [==================================================>] 1.297kB/1.297kB 17:02:50 de723b4c7ed9 Extracting [==================================================>] 1.297kB/1.297kB 17:02:50 ee69cc1a77e2 Pull complete 17:02:50 81667b400b57 Extracting [==================================================>] 1.034kB/1.034kB 
17:02:50 [per-layer docker image download/extraction progress output omitted]
17:02:50 api Pulled
17:02:50 pap Pulled
17:02:50 simulator Pulled
17:02:51 policy-db-migrator Pulled
17:03:03 mariadb Pulled
17:03:05 apex-pdp Pulled
[==================================================>] 1.225kB/1.225kB 17:03:12 10894799ccd9 Pull complete 17:03:12 10894799ccd9 Pull complete 17:03:12 5ec9e969968e Pull complete 17:03:12 c077747462c3 Pull complete 17:03:12 prometheus Pulled 17:03:12 grafana Pulled 17:03:12 8d377259558c Extracting [> ] 458.8kB/43.24MB 17:03:12 8d377259558c Extracting [> ] 458.8kB/43.24MB 17:03:12 8d377259558c Extracting [====================> ] 17.89MB/43.24MB 17:03:12 8d377259558c Extracting [====================> ] 17.89MB/43.24MB 17:03:12 8d377259558c Extracting [==========================================> ] 37.16MB/43.24MB 17:03:12 8d377259558c Extracting [==========================================> ] 37.16MB/43.24MB 17:03:12 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 17:03:12 8d377259558c Extracting [==================================================>] 43.24MB/43.24MB 17:03:12 8d377259558c Pull complete 17:03:12 8d377259558c Pull complete 17:03:12 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:03:12 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:03:12 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:03:12 e7688095d1e6 Extracting [==================================================>] 1.106kB/1.106kB 17:03:12 e7688095d1e6 Pull complete 17:03:12 e7688095d1e6 Pull complete 17:03:12 8eab815b3593 Extracting [==================================================>] 853B/853B 17:03:12 8eab815b3593 Extracting [==================================================>] 853B/853B 17:03:12 8eab815b3593 Extracting [==================================================>] 853B/853B 17:03:12 8eab815b3593 Extracting [==================================================>] 853B/853B 17:03:12 8eab815b3593 Pull complete 17:03:12 8eab815b3593 Pull complete 17:03:12 00ded6dd259e Extracting 
[==================================================>] 98B/98B 17:03:12 00ded6dd259e Extracting [==================================================>] 98B/98B 17:03:12 00ded6dd259e Extracting [==================================================>] 98B/98B 17:03:12 00ded6dd259e Extracting [==================================================>] 98B/98B 17:03:12 00ded6dd259e Pull complete 17:03:12 00ded6dd259e Pull complete 17:03:12 296f622c8150 Extracting [==================================================>] 172B/172B 17:03:12 296f622c8150 Extracting [==================================================>] 172B/172B 17:03:12 296f622c8150 Extracting [==================================================>] 172B/172B 17:03:12 296f622c8150 Extracting [==================================================>] 172B/172B 17:03:13 296f622c8150 Pull complete 17:03:13 296f622c8150 Pull complete 17:03:13 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 17:03:13 4ee3050cff6b Extracting [=======> ] 32.77kB/230.6kB 17:03:13 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 17:03:13 4ee3050cff6b Extracting [==================================================>] 230.6kB/230.6kB 17:03:13 4ee3050cff6b Pull complete 17:03:13 4ee3050cff6b Pull complete 17:03:13 98acab318002 Extracting [> ] 557.1kB/121.9MB 17:03:13 519f42193ec8 Extracting [> ] 557.1kB/121.9MB 17:03:13 98acab318002 Extracting [======> ] 16.15MB/121.9MB 17:03:13 519f42193ec8 Extracting [=====> ] 13.37MB/121.9MB 17:03:13 98acab318002 Extracting [=============> ] 31.75MB/121.9MB 17:03:13 519f42193ec8 Extracting [===========> ] 28.41MB/121.9MB 17:03:13 98acab318002 Extracting [===================> ] 47.35MB/121.9MB 17:03:13 519f42193ec8 Extracting [=================> ] 42.89MB/121.9MB 17:03:13 98acab318002 Extracting [==========================> ] 64.62MB/121.9MB 17:03:13 519f42193ec8 Extracting [========================> ] 60.16MB/121.9MB 17:03:13 98acab318002 Extracting 
[================================> ] 80.22MB/121.9MB 17:03:13 519f42193ec8 Extracting [===============================> ] 77.99MB/121.9MB 17:03:13 98acab318002 Extracting [=======================================> ] 95.26MB/121.9MB 17:03:13 519f42193ec8 Extracting [=======================================> ] 95.81MB/121.9MB 17:03:13 98acab318002 Extracting [===========================================> ] 105.3MB/121.9MB 17:03:13 519f42193ec8 Extracting [=============================================> ] 110.9MB/121.9MB 17:03:14 519f42193ec8 Extracting [================================================> ] 119.2MB/121.9MB 17:03:14 98acab318002 Extracting [===============================================> ] 117MB/121.9MB 17:03:14 519f42193ec8 Extracting [==================================================>] 121.9MB/121.9MB 17:03:14 98acab318002 Extracting [=================================================> ] 121.4MB/121.9MB 17:03:14 98acab318002 Extracting [==================================================>] 121.9MB/121.9MB 17:03:14 519f42193ec8 Pull complete 17:03:14 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 17:03:14 5df3538dc51e Extracting [==================================================>] 3.627kB/3.627kB 17:03:14 98acab318002 Pull complete 17:03:14 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 17:03:14 878348106a95 Extracting [==================================================>] 3.447kB/3.447kB 17:03:14 5df3538dc51e Pull complete 17:03:14 kafka Pulled 17:03:14 878348106a95 Pull complete 17:03:14 zookeeper Pulled 17:03:14 Network compose_default Creating 17:03:14 Network compose_default Created 17:03:14 Container prometheus Creating 17:03:14 Container mariadb Creating 17:03:14 Container zookeeper Creating 17:03:14 Container simulator Creating 17:03:25 Container zookeeper Created 17:03:25 Container kafka Creating 17:03:25 Container simulator Created 17:03:25 Container 
mariadb Created 17:03:25 Container prometheus Created 17:03:25 Container policy-db-migrator Creating 17:03:25 Container grafana Creating 17:03:25 Container grafana Created 17:03:25 Container policy-db-migrator Created 17:03:25 Container policy-api Creating 17:03:25 Container kafka Created 17:03:25 Container policy-api Created 17:03:25 Container policy-pap Creating 17:03:25 Container policy-pap Created 17:03:25 Container policy-apex-pdp Creating 17:03:25 Container policy-apex-pdp Created 17:03:25 Container prometheus Starting 17:03:25 Container mariadb Starting 17:03:25 Container simulator Starting 17:03:25 Container zookeeper Starting 17:03:26 Container prometheus Started 17:03:26 Container grafana Starting 17:03:27 Container grafana Started 17:03:27 Container simulator Started 17:03:28 Container mariadb Started 17:03:28 Container policy-db-migrator Starting 17:03:29 Container policy-db-migrator Started 17:03:29 Container policy-api Starting 17:03:30 Container policy-api Started 17:03:30 Container zookeeper Started 17:03:30 Container kafka Starting 17:03:31 Container kafka Started 17:03:31 Container policy-pap Starting 17:03:32 Container policy-pap Started 17:03:32 Container policy-apex-pdp Starting 17:03:33 Container policy-apex-pdp Started 17:03:33 Prometheus server: http://localhost:30259 17:03:33 Grafana server: http://localhost:30269 17:03:43 Waiting for REST to come up on localhost port 30003... 
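The "Waiting for REST to come up on localhost port 30003..." step, interleaved with the recurring NAMES/STATUS snapshots below, is a plain poll-until-ready loop. A minimal sketch of such a wait; the function name and the bash-style `/dev/tcp` probe are assumptions for illustration, not necessarily what the CSIT scripts use:

```shell
# Minimal sketch of a "wait for REST" poll loop, assuming a bash-like shell
# where /dev/tcp/<host>/<port> opens a TCP connection to the endpoint.
# Returns 0 once the port accepts a connection, 1 if the timeout elapses.
wait_for_rest() {
  host=$1; port=$2; timeout=${3:-120}; waited=0
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    sleep 5
    waited=$((waited + 5))
    if [ "${waited}" -ge "${timeout}" ]; then
      echo "REST on ${host}:${port} not up after ${timeout}s" >&2
      return 1
    fi
  done
  echo "REST on ${host}:${port} is up after ${waited}s"
}

# Example: wait_for_rest localhost 30003 300
```

The build's own loop additionally prints a `docker ps`-style status table between probes, which is what the repeated NAMES/STATUS blocks in this log are.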
17:03:43 [container status polled every 5 s until 17:04:08; all nine containers report Up; last snapshot:]
17:04:08 NAMES            STATUS
17:04:08 policy-apex-pdp  Up 35 seconds
17:04:08 policy-pap       Up 36 seconds
17:04:08 policy-api       Up 38 seconds
17:04:08 kafka            Up 37 seconds
17:04:08 grafana          Up 41 seconds
17:04:08 zookeeper        Up 37 seconds
17:04:08 mariadb          Up 39 seconds
17:04:08 simulator        Up 40 seconds
17:04:08 prometheus       Up 41 seconds
17:04:08 Build docker image for robot framework
17:04:08 Error: No such image: policy-csit-robot
17:04:08 Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
17:04:10 Build robot framework docker image
17:04:10 Sending build context to Docker daemon 16.14MB
17:04:10 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
17:04:10 [base image layers pulled; per-layer progress condensed]
17:04:13 Digest: sha256:2674abed3e7ffff21501d1a5ca773920c2ed6d2087a871fd07799ff029c909c2
17:04:13 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
17:04:13 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
17:04:15 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
17:04:15 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
17:04:15 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
17:04:27 bcrypt==4.2.0 certifi==2024.8.30 cffi==1.17.1 charset-normalizer==3.4.0 confluent-kafka==2.6.0 cryptography==43.0.3 decorator==5.1.1 deepdiff==8.0.1 dnspython==2.7.0 future==1.0.0 idna==3.10 Jinja2==3.1.4 jsonpath-rw==1.4.0 kafka-python==2.0.2 MarkupSafe==3.0.2 more-itertools==5.0.0 orderly-set==5.2.2 paramiko==3.5.0 pbr==6.1.0 ply==3.11 protobuf==5.29.0rc2 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2 requests==2.32.3 robotframework==7.1.1 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a12 robotlibcore-temp==1.0.2 six==1.16.0 urllib3==2.2.3
17:04:30 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
17:04:31 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
17:04:33 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
17:04:33 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
17:04:33 [intermediate build containers created and removed; churn lines condensed]
17:04:33 Successfully built 93ecea41e42f
17:04:33 Successfully tagged policy-csit-robot:latest
17:04:36 top - 17:04:36 up 3 min, 0 users, load average: 2.75, 1.53, 0.61
17:04:36 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
17:04:36 %Cpu(s): 15.2 us, 3.5 sy, 0.0 ni, 75.9 id, 5.2 wa, 0.0 hi, 0.1 si, 0.1 st
17:04:36        total   used   free   shared   buff/cache   available
17:04:36 Mem:     31G   2.8G    22G     1.3M         6.2G         28G
17:04:36 Swap:   1.0G     0B   1.0G
17:04:36 [all nine containers still Up About a minute]
17:04:38 CONTAINER ID  NAME             CPU %  MEM USAGE / LIMIT    MEM %  NET I/O          BLOCK I/O        PIDS
17:04:38 2c6d6dfc4087  policy-apex-pdp  1.36%  172.8MiB / 31.41GiB  0.54%  26kB / 29.3kB    0B / 0B          49
17:04:38 79c44506d103  policy-pap       1.88%  501.9MiB / 31.41GiB  1.56%  108kB / 102kB    0B / 149MB       64
17:04:38 2f4bac749246  policy-api       0.12%  548.8MiB / 31.41GiB  1.71%  988kB / 647kB    0B / 0B          55
17:04:38 254b431a943f  kafka            4.12%  388.9MiB / 31.41GiB  1.21%  125kB / 124kB    8.19kB / 532kB   87
17:04:38 9f8b0599c35c  grafana          0.09%  66.73MiB / 31.41GiB  0.21%  8.41MB / 31.7kB  0B / 27.5MB      20
17:04:38 72b62eaaaa41  zookeeper        0.06%  86.98MiB / 31.41GiB  0.27%  54.1kB / 47.5kB  0B / 410kB       63
17:04:38 685fb5a0db0a  mariadb          0.02%  103.1MiB / 31.41GiB  0.32%  969kB / 1.22MB   11.2MB / 71.5MB  31
17:04:38 664f262cad31  simulator        0.07%  123MiB / 31.41GiB    0.38%  1.61kB / 0B      0B / 0B          77
17:04:38 43b16c268efa  prometheus       0.06%  19.51MiB / 31.41GiB  0.06%  2.09kB / 474B    49.2kB / 0B      13
17:04:39 Container policy-csit Creating
17:04:39 Container policy-csit Created
17:04:39 Attaching to policy-csit
17:04:40 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
17:04:40 policy-csit | Run Robot test
17:04:40 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
17:04:40 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
17:04:40 policy-csit | -v POLICY_API_IP:policy-api:6969
17:04:40 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
17:04:40 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
17:04:40 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
17:04:40 policy-csit | -v APEX_IP:policy-apex-pdp:6969
17:04:40 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
17:04:40 policy-csit | -v KAFKA_IP:kafka:9092
17:04:40 policy-csit | -v PROMETHEUS_IP:prometheus:9090
17:04:40 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
17:04:40 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
17:04:40 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
17:04:40 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
17:04:40 policy-csit | -v TEMP_FOLDER:/tmp/distribution
17:04:40 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
17:04:40 policy-csit | -v CLAMP_K8S_TEST:
17:04:40 policy-csit | Starting Robot test suites ...
17:04:40 policy-csit | ==============================================================================
17:04:40 policy-csit | Pap-Test & Pap-Slas
17:04:40 policy-csit | ==============================================================================
17:04:40 policy-csit | Pap-Test & Pap-Slas.Pap-Test
17:04:40 policy-csit | ==============================================================================
17:04:41 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
17:04:41 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
17:04:42 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
17:04:42 policy-csit | Healthcheck :: Verify policy pap health check | PASS |
17:05:02 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
17:05:02 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
17:05:03 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
17:05:03 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
17:05:03 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
17:05:03 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
17:05:04 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
17:05:04 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
17:05:04 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
17:05:04 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
17:05:04 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
17:05:05 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
17:05:05 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
17:05:05 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
17:05:05 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
17:05:05 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
17:05:05 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
17:05:06 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
17:05:06 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
17:05:06 policy-csit | 22 tests, 22 passed, 0 failed
17:05:06 policy-csit | ==============================================================================
17:05:06 policy-csit | Pap-Test & Pap-Slas.Pap-Slas
17:05:06 policy-csit | ==============================================================================
17:06:06 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
17:06:06 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
17:06:06 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
17:06:06 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
17:06:06 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
17:06:06 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
17:06:06 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
17:06:06 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
17:06:06 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
17:06:06 policy-csit | 8 tests, 8 passed, 0 failed
17:06:06 policy-csit | ==============================================================================
17:06:06 policy-csit | Pap-Test & Pap-Slas | PASS |
17:06:06 policy-csit | 30 tests, 30 passed, 0 failed
17:06:06 policy-csit | ==============================================================================
17:06:06 policy-csit | Output: /tmp/results/output.xml
17:06:06 policy-csit | Log: /tmp/results/log.html
17:06:06 policy-csit | Report: /tmp/results/report.html
17:06:06 policy-csit | RESULT: 0
17:06:06 policy-csit exited with code 0
17:06:06 [all nine containers Up 2 minutes]
17:06:06 Shut down started!
17:06:08 Collecting logs from docker compose containers...
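The suite verdict above ("30 tests, 30 passed, 0 failed", RESULT: 0) is what decides the job outcome; the "policy-csit exited with code 0" line suggests the run script simply propagates Robot's exit code. A small sketch of deriving a pass/fail verdict from such a summary line; the `verdict` helper is hypothetical, not part of the CSIT scripts:

```shell
# Hypothetical helper: parse a Robot Framework summary line such as
# "30 tests, 30 passed, 0 failed" and report PASS when nothing failed.
verdict() {
  # Extract the count immediately before the word "failed"; empty on no match.
  failed=$(printf '%s\n' "$1" | sed -n 's/.* \([0-9][0-9]*\) failed.*/\1/p')
  if [ -n "${failed}" ] && [ "${failed}" -eq 0 ]; then
    echo PASS
  else
    echo FAIL
  fi
}

verdict "30 tests, 30 passed, 0 failed"   # prints PASS
```

In practice Robot's own return code (0 on all-pass, otherwise the number of failed tests, capped) makes this parsing unnecessary; the sketch only mirrors how the printed summary maps to the verdict.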
17:06:12 ======== Logs from grafana ========
17:06:12 grafana | logger=settings t=2024-10-31T17:03:27.471876935Z level=info msg="Starting Grafana" version=11.3.0 commit=d9455ff7db73b694db7d412e49a68bec767f2b5a branch=HEAD compiled=2024-10-31T17:03:27Z
17:06:12 grafana | logger=settings t=2024-10-31T17:03:27.472289191Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
17:06:12 grafana | logger=settings t=2024-10-31T17:03:27.472302751Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
17:06:12 grafana | logger=settings [config overridden from command line: default.paths.data=/var/lib/grafana, default.paths.logs=/var/log/grafana, default.paths.plugins=/var/lib/grafana/plugins, default.paths.provisioning=/etc/grafana/provisioning, default.log.mode=console; from environment: GF_PATHS_DATA, GF_PATHS_LOGS, GF_PATHS_PLUGINS, GF_PATHS_PROVISIONING]
17:06:12 grafana | logger=settings t=2024-10-31T17:03:27.472341232Z level=info msg=Target target=[all]
17:06:12 grafana | logger=settings [paths: Home=/usr/share/grafana Data=/var/lib/grafana Logs=/var/log/grafana Plugins=/var/lib/grafana/plugins Provisioning=/etc/grafana/provisioning]
17:06:12 grafana | logger=settings t=2024-10-31T17:03:27.472364342Z level=info msg="App mode production"
17:06:12 grafana | logger=featuremgmt t=2024-10-31T17:03:27.472742728Z level=info msg=FeatureToggles [roughly fifty toggles enabled, e.g. nestedFolders=true dashboardScene=true alertingInsights=true publicDashboards=true; full list condensed]
17:06:12 grafana | logger=sqlstore t=2024-10-31T17:03:27.472802979Z level=info msg="Connecting to DB" dbtype=sqlite3
17:06:12 grafana | logger=sqlstore t=2024-10-31T17:03:27.47289027Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.47427954Z level=info msg="Locking database"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.47429045Z level=info msg="Starting DB migrations"
17:06:12 grafana | logger=migrator [migrations executed, each in under ~3.1 ms: create migration_log table; create user table; add unique indexes user.login and user.email; drop indexes UQE_user_login and UQE_user_email v1; rename table user to user_v1; create user table v2; recreate UQE_user_login and UQE_user_email indexes v2; individual entries condensed]
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.529195616Z level=info msg="Executing migration" id="copy data_source v1
to v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.529769734Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=572.358µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.53294692Z level=info msg="Executing migration" id="Drop old table user_v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.533692381Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=745.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.536998227Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.538615571Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.613564ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.541874407Z level=info msg="Executing migration" id="Update user table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.541913599Z level=info msg="Migration successfully executed" id="Update user table charset" duration=40.472µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.547174584Z level=info msg="Executing migration" id="Add last_seen_at column to user" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.548234579Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.060036ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.55109052Z level=info msg="Executing migration" id="Add missing user data" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.551400154Z level=info msg="Migration successfully executed" id="Add missing user data" duration=309.944µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.554844114Z level=info msg="Executing migration" id="Add is_disabled column to user" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.556288994Z level=info msg="Migration successfully executed" id="Add is_disabled column to 
user" duration=1.44748ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.559201866Z level=info msg="Executing migration" id="Add index user.login/user.email" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.559946546Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=744.58µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.565004759Z level=info msg="Executing migration" id="Add is_service_account column to user" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.566095875Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.090516ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.569080957Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.576863319Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.782002ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.579910053Z level=info msg="Executing migration" id="Add uid column to user" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.581004468Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.094335ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.584087613Z level=info msg="Executing migration" id="Update uid column values for users" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.584472138Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=357.544µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.589991627Z level=info msg="Executing migration" id="Add unique index user_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.590710547Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=718.4µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.593822012Z 
level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.594159447Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=338.155µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.597258322Z level=info msg="Executing migration" id="update login and email fields to lowercase" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.597603956Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=343.624µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.60275591Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.603034894Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=278.285µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.606161308Z level=info msg="Executing migration" id="create temp user table v1-7" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.60695342Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=792.082µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.609856242Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.610551382Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=694.73µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.613463173Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.614137863Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" 
duration=674.87µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.619163435Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.619861125Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=697.75µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.624321719Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.625200661Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=879.042µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.628513289Z level=info msg="Executing migration" id="Update temp_user table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.628544729Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=32.25µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.634010407Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.634651267Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=641.16µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.63771118Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.638326619Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=615.449µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.641354263Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.641996602Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=642.299µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:27.64747431Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.648116049Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=641.739µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.654014524Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.656931246Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.918092ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.659876868Z level=info msg="Executing migration" id="create temp_user v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.660457937Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=582.739µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.665738262Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.666421332Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=682.65µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.669577697Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.670270337Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=694.82µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.673333651Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.67400556Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=671.469µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.679224305Z level=info 
msg="Executing migration" id="create index IDX_temp_user_status - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.679993196Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=768.191µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.683084291Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.683433166Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=349.065µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.715428804Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.715985932Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=555.259µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.719305289Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.719676854Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=371.575µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.725390747Z level=info msg="Executing migration" id="create star table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.725979805Z level=info msg="Migration successfully executed" id="create star table" duration=589.438µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.729346663Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.730063313Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=716.4µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.733581324Z level=info msg="Executing migration" id="create org table v1" 17:06:12 
grafana | logger=migrator t=2024-10-31T17:03:27.734350225Z level=info msg="Migration successfully executed" id="create org table v1" duration=768.831µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.739360807Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.740133908Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=772.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.743247232Z level=info msg="Executing migration" id="create org_user table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.743941932Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=692.64µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.747028036Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.747726196Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=697.57µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.751150235Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.751902246Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=751.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.756986199Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.75768786Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=701.24µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.760653242Z level=info msg="Executing migration" id="Update org table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.760681182Z level=info msg="Migration 
successfully executed" id="Update org table charset" duration=28.58µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.763589204Z level=info msg="Executing migration" id="Update org_user table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.763617714Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=26.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.766571556Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.76674686Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=175.374µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.771439746Z level=info msg="Executing migration" id="create dashboard table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.772194617Z level=info msg="Migration successfully executed" id="create dashboard table" duration=754.471µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.775044158Z level=info msg="Executing migration" id="add index dashboard.account_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.7758691Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=824.082µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.778837042Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.779650724Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=812.952µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.782693327Z level=info msg="Executing migration" id="create dashboard_tag table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.783332056Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=638.169µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:27.788228436Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.789001947Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=773.021µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.791983041Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.792673141Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=689.4µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.795836546Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.800780976Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.94305ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.805839418Z level=info msg="Executing migration" id="create dashboard v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.806565Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=725.112µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.809466071Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.810165701Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=699.23µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.813536369Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.81425561Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=718.701µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:27.819048878Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.819358433Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=309.235µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.822332875Z level=info msg="Executing migration" id="drop table dashboard_v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.823082356Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=749.131µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.826024028Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.826086689Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=62.901µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.831042079Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.832771205Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.728586ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.835455224Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.837200828Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.745134ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.840532966Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.8422211Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.687534ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.84644448Z level=info msg="Executing migration" id="Add index for gnetId in 
dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.847200402Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=755.502µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.85061871Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.852463377Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.843707ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.855651913Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.856404744Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=750.121µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.86039843Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.861128651Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=729.741µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.864226166Z level=info msg="Executing migration" id="Update dashboard table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.864249266Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=23.75µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.867569304Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.867595654Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=26.76µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.871911466Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:27.873733691Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.820016ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.877001158Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.878880896Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.879448ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.882087301Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.883979938Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.892717ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.887995726Z level=info msg="Executing migration" id="Add column uid in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.889801721Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.805745ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.892968487Z level=info msg="Executing migration" id="Update uid column values in dashboard" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.893158849Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=193.442µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.895708826Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.896453487Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=744.371µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.899759205Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.900445485Z level=info msg="Migration successfully executed" 
id="Remove unique index org_id_slug" duration=685.27µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.904649375Z level=info msg="Executing migration" id="Update dashboard title length" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.904672565Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=23.93µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.908760724Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.909526385Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=764.991µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.912781731Z level=info msg="Executing migration" id="create dashboard_provisioning" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.91343377Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=651.969µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.917338506Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.922653582Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.314926ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.925899069Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.926615509Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=716.25µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.929791525Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.930532656Z 
level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=740.971µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.934530772Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.935282143Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=751.101µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.938614931Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.938904495Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=289.524µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.94202458Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.942504167Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=479.697µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.946592216Z level=info msg="Executing migration" id="Add check_sum column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.948575063Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.985297ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.951656658Z level=info msg="Executing migration" id="Add index for dashboard_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.95246993Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=812.871µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.955688826Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.955865138Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=176.492µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.960101099Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.960264361Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=162.702µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.963448517Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.964229828Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=781.671µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.967299722Z level=info msg="Executing migration" id="Add isPublic for dashboard"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.969382822Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.08295ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.972808991Z level=info msg="Executing migration" id="Add deleted for dashboard"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.97480946Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.000179ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.979114781Z level=info msg="Executing migration" id="Add index for deleted"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.979877932Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=762.861µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.98318793Z level=info msg="Executing migration" id="create data_source table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.984082922Z level=info msg="Migration successfully executed" id="create data_source table" duration=896.252µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.987463981Z level=info msg="Executing migration" id="add index data_source.account_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.988268282Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=803.771µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.992551594Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.993311184Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=760.831µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.996615842Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:27.997298232Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=681.44µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.000735951Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.00139397Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=654.599µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.005593582Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.01151787Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.923528ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.014777959Z level=info msg="Executing migration" id="create data_source table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.015622631Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=844.182µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.019439198Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.02022095Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=780.832µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.023564118Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.02431859Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=753.472µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.02770901Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.028267138Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=557.218µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.03245361Z level=info msg="Executing migration" id="Add column with_credentials"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.034671853Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.217412ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.037930811Z level=info msg="Executing migration" id="Add secure json data column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.040198614Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.265253ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.07909542Z level=info msg="Executing migration" id="Update data_source table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.07912381Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=28.74µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.083719247Z level=info msg="Executing migration" id="Update initial version to 1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.083926881Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=207.094µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.087152448Z level=info msg="Executing migration" id="Add read_only data column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.089426613Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.273035ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.092800992Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.092981425Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=180.053µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.096389996Z level=info msg="Executing migration" id="Update json_data with nulls"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.096556518Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=167.082µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.10078882Z level=info msg="Executing migration" id="Add uid column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.103023343Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.234653ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.106430364Z level=info msg="Executing migration" id="Update uid value"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.106601786Z level=info msg="Migration successfully executed" id="Update uid value" duration=171.372µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.109984736Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.110729067Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=744.011µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.114731846Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.115456957Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=724.931µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.11907308Z level=info msg="Executing migration" id="Add is_prunable column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.121456215Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.382755ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.124744324Z level=info msg="Executing migration" id="Add api_version column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.127034278Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.289374ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.130513249Z level=info msg="Executing migration" id="create api_key table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.13124903Z level=info msg="Migration successfully executed" id="create api_key table" duration=735.441µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.135654996Z level=info msg="Executing migration" id="add index api_key.account_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.136409857Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=754.501µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.139670804Z level=info msg="Executing migration" id="add index api_key.key"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.140500487Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=829.523µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.14470128Z level=info msg="Executing migration" id="add index api_key.account_id_name"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.145452311Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=751.041µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.148534596Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.149219526Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=684.71µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.152667597Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.153354487Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=684.11µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.157400897Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.158483493Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.081786ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.162476012Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.168870456Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.393974ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.172612761Z level=info msg="Executing migration" id="create api_key table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.173432834Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=823.503µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.178064712Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.178679701Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=614.899µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.181877189Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.182687321Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=810.332µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.18603933Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.186870253Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=830.753µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.191172886Z level=info msg="Executing migration" id="copy api_key v1 to v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.191805855Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=632.089µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.195606881Z level=info msg="Executing migration" id="Drop old table api_key_v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.196550376Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=942.245µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.200076558Z level=info msg="Executing migration" id="Update api_key table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.200103718Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=34.65µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.204370451Z level=info msg="Executing migration" id="Add expires to api_key table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.206898108Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.527677ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.210221277Z level=info msg="Executing migration" id="Add service account foreign key"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.212763095Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.541458ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.216144915Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.216363538Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=218.213µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.219789859Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.222368527Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.578067ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.226528438Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.229106306Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.576958ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.232266944Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.233036365Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=769.031µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.236414255Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.236988143Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=573.018µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.241589711Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.242941251Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.35059ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.246631035Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.247940995Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.30997ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.251651499Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.252478322Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=827.192µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.256577032Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.257440736Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=863.544µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.260589262Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.260687843Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=98.101µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.264132594Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.264159204Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=27.68µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.268569859Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.272875244Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.304415ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.276578129Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.279262019Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.68322ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.28270604Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.28277098Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=65.63µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.286593006Z level=info msg="Executing migration" id="create quota table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.287383458Z level=info msg="Migration successfully executed" id="create quota table v1" duration=790.832µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.291709753Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.292641066Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=930.063µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.296759787Z level=info msg="Executing migration" id="Update quota table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.296786688Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.481µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.30030537Z level=info msg="Executing migration" id="create plugin_setting table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.301545898Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.238758ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.306182366Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.307637387Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.454541ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.31116449Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.314047312Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.879122ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.31727887Z level=info msg="Executing migration" id="Update plugin_setting table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.317305211Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=25.081µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.321536033Z level=info msg="Executing migration" id="create session table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.322331625Z level=info msg="Migration successfully executed" id="create session table" duration=795.032µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.325967119Z level=info msg="Executing migration" id="Drop old table playlist table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.326081631Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=113.962µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.329815026Z level=info msg="Executing migration" id="Drop old table playlist_item table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.330037669Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=221.883µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.334048869Z level=info msg="Executing migration" id="create playlist table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.335203736Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.157818ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.339919535Z level=info msg="Executing migration" id="create playlist item table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.340718728Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=798.643µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.343941785Z level=info msg="Executing migration" id="Update playlist table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.343966315Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=25.29µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.347381516Z level=info msg="Executing migration" id="Update playlist_item table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.347418336Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=38.05µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.354499611Z level=info msg="Executing migration" id="Add playlist column created_at"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.357064079Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.568548ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.362178974Z level=info msg="Executing migration" id="Add playlist column updated_at"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.364457128Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.277744ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.367618255Z level=info msg="Executing migration" id="drop preferences table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.367752347Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=133.462µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.370903554Z level=info msg="Executing migration" id="drop preferences table v3"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.371181808Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=278.744µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.37474719Z level=info msg="Executing migration" id="create preferences table v3"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.376396194Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.648854ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.381741364Z level=info msg="Executing migration" id="Update preferences table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.381844965Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=103.601µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.383966396Z level=info msg="Executing migration" id="Add column team_id in preferences"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.387112533Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.146137ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.420190813Z level=info msg="Executing migration" id="Update team_id column values in preferences"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.420561219Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=371.685µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.426704189Z level=info msg="Executing migration" id="Add column week_start in preferences"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.432133149Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.430181ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.435823853Z level=info msg="Executing migration" id="Add column preferences.json_data"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.439418716Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.595383ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.442653134Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.442825237Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=173.353µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.446154096Z level=info msg="Executing migration" id="Add preferences index org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.44708957Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=935.844µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.452314327Z level=info msg="Executing migration" id="Add preferences index user_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.453688458Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.374131ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.45722299Z level=info msg="Executing migration" id="create alert table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.459570844Z level=info msg="Migration successfully executed" id="create alert table v1" duration=2.347854ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.465908888Z level=info msg="Executing migration" id="add index alert org_id & id "
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.466873063Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=964.175µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.469922628Z level=info msg="Executing migration" id="add index alert state"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.471314479Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.391851ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.476546676Z level=info msg="Executing migration" id="add index alert dashboard_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.477527211Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=981.955µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.480962431Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.481734693Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=774.182µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.484648556Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.485552999Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=904.533µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.490567934Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.491162702Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=595.078µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.493808841Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.503052287Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.241696ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.506097393Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.506635661Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=539.248µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.512602969Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.51334832Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=745.361µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.51667533Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.517115536Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=440.206µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.520309633Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.521139455Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=829.822µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.526768298Z level=info msg="Executing migration" id="create alert_notification table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.527642622Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=874.324µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.530500304Z level=info msg="Executing migration" id="Add column is_default"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.537726961Z level=info msg="Migration successfully executed" id="Add column is_default" duration=7.218617ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.543649998Z level=info msg="Executing migration" id="Add column frequency"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.549157979Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.508381ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.554829473Z level=info msg="Executing migration" id="Add column send_reminder"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.55862037Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.790697ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.561666045Z level=info msg="Executing migration" id="Add column disable_resolve_message"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.565452391Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.786276ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.568488226Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.56944232Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=951.284µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.574609347Z level=info msg="Executing migration" id="Update alert table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.574647437Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=38.49µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.577472359Z level=info msg="Executing migration" id="Update alert_notification table charset"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.577511269Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=42.27µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.580584644Z level=info msg="Executing migration" id="create notification_journal table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.581867134Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.232999ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.585054581Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.585962734Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=908.103µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.592889157Z level=info msg="Executing migration" id="drop alert_notification_journal"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.593993283Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.103815ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.597368363Z level=info msg="Executing migration" id="create alert_notification_state table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.598597281Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.228918ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.604241074Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.605201138Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=959.964µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.608093571Z level=info msg="Executing migration" id="Add for to alert table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.61403787Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.942089ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.617073394Z level=info msg="Executing migration" id="Add column uid in alert_notification"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.620988052Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.914188ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.626557895Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.626871089Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=310.204µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.629769522Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.630776177Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.006655ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.63368628Z level=info msg="Executing migration" id="Remove unique index org_id_name"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.634682454Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=995.494µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.640092284Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.645784468Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.696444ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.648456778Z level=info msg="Executing migration" id="alter
alert.settings to mediumtext" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.64861319Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=154.212µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.651395752Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.652414797Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.019265ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.657886639Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.658855432Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=928.264µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.662002308Z level=info msg="Executing migration" id="Drop old annotation table v4" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.662132781Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=130.443µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.665411309Z level=info msg="Executing migration" id="create annotation table v5" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.666418834Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.007025ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.67157475Z level=info msg="Executing migration" id="add index annotation 0 v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.672855399Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.282079ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.675767302Z level=info msg="Executing migration" id="add index annotation 1 v3" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:28.677113052Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.34486ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.690937197Z level=info msg="Executing migration" id="add index annotation 2 v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.693723178Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=2.788311ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.697294691Z level=info msg="Executing migration" id="add index annotation 3 v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.698746492Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.452161ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.703560124Z level=info msg="Executing migration" id="add index annotation 4 v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.704935004Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.37273ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.710946552Z level=info msg="Executing migration" id="Update annotation table charset" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.710983603Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.711µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.714754559Z level=info msg="Executing migration" id="Add column region_id to annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.719472739Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.7172ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.72498438Z level=info msg="Executing migration" id="Drop category_id index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.726013686Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.029226ms 17:06:12 
grafana | logger=migrator t=2024-10-31T17:03:28.735019749Z level=info msg="Executing migration" id="Add column tags to annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.740719753Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.650663ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.777649929Z level=info msg="Executing migration" id="Create annotation_tag table v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.778454431Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=807.562µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.781736989Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.782468111Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=733.132µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.786749484Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.787403584Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=654.18µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.790143334Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.808895471Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=18.751237ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.812332732Z level=info msg="Executing migration" id="Create annotation_tag table v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.812931191Z level=info msg="Migration successfully executed" id="Create annotation_tag table 
v3" duration=599.089µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.817249015Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.817925425Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=678.03µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.821419686Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.821670611Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=250.694µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.824639164Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.82502867Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=388.925µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.829838531Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.830224817Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=385.876µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.833509845Z level=info msg="Executing migration" id="Add created time to annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.83783756Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.327195ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.841171639Z level=info msg="Executing migration" id="Add updated time to annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.844430487Z 
level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.263499ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.847620825Z level=info msg="Executing migration" id="Add index for created in annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.848602549Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=981.724µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.852844721Z level=info msg="Executing migration" id="Add index for updated in annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.853735664Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=891.243µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.85681097Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.857043793Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=233.183µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.861190895Z level=info msg="Executing migration" id="Add epoch_end column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.868084717Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.895992ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.871611999Z level=info msg="Executing migration" id="Add index for epoch_end" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.872243748Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=627.099µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.876108715Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.876231787Z level=info msg="Migration successfully executed" 
id="Make epoch_end the same as epoch" duration=123.482µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.879282062Z level=info msg="Executing migration" id="Move region to single row" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.879851871Z level=info msg="Migration successfully executed" id="Move region to single row" duration=578.889µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.883796669Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.885032428Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.235539ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.890692631Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.891742377Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.051386ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.897055675Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.898244103Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.188578ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.903737784Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.90480675Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.066306ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.909771203Z level=info msg="Executing migration" id="Remove index 
org_id_epoch_epoch_end from annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.910632516Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=861.913µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.913202674Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.914380791Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.182277ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.917197653Z level=info msg="Executing migration" id="Increase tags column to length 4096" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.917288754Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=91.351µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.923689949Z level=info msg="Executing migration" id="create test_data table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.924651624Z level=info msg="Migration successfully executed" id="create test_data table" duration=962.075µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.9318282Z level=info msg="Executing migration" id="create dashboard_version table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.932888575Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.064395ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.937466323Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.938417197Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=951.284µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.941313369Z level=info msg="Executing migration" id="add unique index 
dashboard_version.dashboard_id and dashboard_version.version" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.942241144Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=930.485µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.945663195Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.945861657Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=199.132µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.94943112Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.949803816Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=370.696µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.953170025Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.953239256Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=70.311µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.961321886Z level=info msg="Executing migration" id="create team table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.962171118Z level=info msg="Migration successfully executed" id="create team table" duration=850.732µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.965163032Z level=info msg="Executing migration" id="add index team.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.966150687Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=988.935µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.970831477Z level=info msg="Executing migration" 
id="add unique index team_org_id_name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.971998003Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.170076ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.976563502Z level=info msg="Executing migration" id="Add column uid in team" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.981280501Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.717399ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.984133774Z level=info msg="Executing migration" id="Update uid column values in team" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.984311556Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=178.232µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.987674566Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.98863621Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=961.784µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.99133283Z level=info msg="Executing migration" id="create team member table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.992151292Z level=info msg="Migration successfully executed" id="create team member table" duration=818.532µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.997509201Z level=info msg="Executing migration" id="add index team_member.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:28.998392054Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=883.203µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.001401458Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.002310032Z level=info 
msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=908.634µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.00495894Z level=info msg="Executing migration" id="add index team_member.team_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.005875564Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=916.804µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.01118547Z level=info msg="Executing migration" id="Add column email to team table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.016014699Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.829059ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.018641818Z level=info msg="Executing migration" id="Add column external to team_member table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.023410506Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.767279ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.02649397Z level=info msg="Executing migration" id="Add column permission to team_member table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.031498562Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.005232ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.034254891Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.035210646Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=949.965µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.04180898Z level=info msg="Executing migration" id="create dashboard acl table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.042725393Z level=info msg="Migration successfully executed" 
id="create dashboard acl table" duration=920.753µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.048306113Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.049066234Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=760.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.052122089Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.053324995Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.202516ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.057997332Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.059314891Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.321319ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.064692799Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.065805184Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.103185ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.070842456Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.072241577Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.399401ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.075716026Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.076840083Z level=info msg="Migration successfully executed" id="add index 
dashboard_acl_org_id_role" duration=1.123277ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.080913522Z level=info msg="Executing migration" id="add index dashboard_permission" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.081841435Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=927.953µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.085197903Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.085755141Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=555.928µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.123191078Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.123497603Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=306.135µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.127210666Z level=info msg="Executing migration" id="create tag table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.128821209Z level=info msg="Migration successfully executed" id="create tag table" duration=1.609673ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.132911968Z level=info msg="Executing migration" id="add index tag.key_value" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.134031004Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.120176ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.13792363Z level=info msg="Executing migration" id="create login attempt table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.138747841Z level=info msg="Migration successfully executed" id="create login attempt table" duration=823.802µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:29.144627626Z level=info msg="Executing migration" id="add index login_attempt.username" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.146223179Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.598363ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.151069168Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.151949101Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=880.343µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.156691899Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.170936453Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.244584ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.175325517Z level=info msg="Executing migration" id="create login_attempt v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.175940726Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=582.477µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.180147336Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.18114733Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=999.544µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.189178765Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.189467389Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=289.194µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:29.193620059Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.194182148Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=562.388µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.199303271Z level=info msg="Executing migration" id="create user auth table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.202349044Z level=info msg="Migration successfully executed" id="create user auth table" duration=3.038073ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.208880159Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.209771701Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=891.532µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.214741992Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.214814233Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=73.311µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.218616727Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.226370079Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.753982ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.231602064Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.237098733Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.498669ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.240720825Z level=info msg="Executing 
migration" id="Add OAuth token type to user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.245652316Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.931061ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.248055001Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.253041712Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.986261ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.258854746Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.260029923Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.174077ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.2640969Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.271320205Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.223895ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.277087707Z level=info msg="Executing migration" id="create server_lock table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.277893018Z level=info msg="Migration successfully executed" id="create server_lock table" duration=805.601µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.284318651Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.285226664Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=908.214µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.28842304Z level=info msg="Executing migration" id="create user auth token table" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:29.289293762Z level=info msg="Migration successfully executed" id="create user auth token table" duration=871.822µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.293268499Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.294156542Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=887.793µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.299257046Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.301177702Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.920316ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.305805509Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.30727321Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.467881ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.310820251Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.316131368Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.306067ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.321465224Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.322511709Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.045725ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.326505787Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:29.334458741Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=7.952784ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.336800994Z level=info msg="Executing migration" id="create cache_data table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.337616865Z level=info msg="Migration successfully executed" id="create cache_data table" duration=815.291µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.344652658Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.345637301Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=984.914µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.350074804Z level=info msg="Executing migration" id="create short_url table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.351425334Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.34957ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.355354231Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.356383405Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.029434ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.361684062Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.361750802Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.651µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.366121945Z level=info msg="Executing migration" id="delete alert_definition table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.366284977Z level=info msg="Migration 
successfully executed" id="delete alert_definition table" duration=159.412µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.370171823Z level=info msg="Executing migration" id="recreate alert_definition table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.371652715Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.481092ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.375746713Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.376784898Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.037655ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.381688338Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.382684123Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=995.085µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.387647584Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.387729015Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=82.181µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.391233026Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.392233279Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.000643ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.396963427Z level=info msg="Executing migration" id="drop index in 
alert_definition on org_id and uid columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.398148565Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.180238ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.40408467Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.408027486Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=3.947827ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.41245167Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.413486125Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.033925ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.416852494Z level=info msg="Executing migration" id="Add column paused in alert_definition" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.422594646Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.742022ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.425391346Z level=info msg="Executing migration" id="drop alert_definition table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.426267308Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=875.742µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.430636111Z level=info msg="Executing migration" id="delete alert_definition_version table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.430717602Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=81.791µs 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:29.434229822Z level=info msg="Executing migration" id="recreate alert_definition_version table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.435087225Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=857.313µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.438475994Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.439478528Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.001134ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.477075018Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.478972264Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.896156ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.483133734Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.483200565Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=67.701µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.487127132Z level=info msg="Executing migration" id="drop alert_definition_version table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.488662634Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.503691ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.493688766Z level=info msg="Executing migration" id="create alert_instance 
table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.49530025Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.610674ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.501045152Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.502132457Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.085155ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.508493069Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.510168773Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.675764ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.514003887Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.520279177Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.27442ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.52457394Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.525542153Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=965.433µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.530647036Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.531746982Z level=info msg="Migration successfully executed" 
id="remove index def_org_id, current_state on alert_instance" duration=1.100446ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.536924906Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.569328791Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=32.406575ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.57337278Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.60958957Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=36.209689ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.614394389Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.615442003Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.047724ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.621511731Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.622475594Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=963.413µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.626963919Z level=info msg="Executing migration" id="add current_reason column related to current_state" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.635872347Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.906957ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.641106132Z level=info msg="Executing migration" id="add 
result_fingerprint column to alert_instance" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.64514409Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.038108ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.652008059Z level=info msg="Executing migration" id="create alert_rule table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.653044233Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.036254ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.656151838Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.657710361Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.557943ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.662403967Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.663494973Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.090976ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.667963227Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.669308076Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.344579ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.671947365Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.672014195Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.211µs 
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.676385448Z level=info msg="Executing migration" id="add column for to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.682632608Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.24667ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.690661823Z level=info msg="Executing migration" id="add column annotations to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.700346453Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.68273ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.706403899Z level=info msg="Executing migration" id="add column labels to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.712923793Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.523744ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.71691867Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.717843804Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=925.234µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.723248011Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.724215645Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=969.224µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.729945487Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.73645127Z level=info msg="Migration successfully executed" id="add dashboard_uid 
column to alert_rule" duration=6.502553ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.739666907Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.746504725Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.834228ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.751102581Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.752063144Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=960.593µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.761030103Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.770324237Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.287374ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.775904877Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.782160236Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.254969ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.785609396Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.785671187Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=62.071µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.790539826Z level=info msg="Executing migration" id="create alert_rule_version table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.792301132Z level=info msg="Migration 
successfully executed" id="create alert_rule_version table" duration=1.760936ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.825290056Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.826928249Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.637884ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.832034513Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.833770887Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.735994ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.838444645Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.838507475Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=63.09µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.843436116Z level=info msg="Executing migration" id="add column for to alert_rule_version" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.85278224Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.344104ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.857358596Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.861990192Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.633006ms 17:06:12 
grafana | logger=migrator t=2024-10-31T17:03:29.867989608Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.877686828Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=9.69723ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.883260148Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.888514663Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.253405ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.892584851Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.898816491Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.23112ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.905858172Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.905918523Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=63.481µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.910234795Z level=info msg="Executing migration" id=create_alert_configuration_table 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.910982785Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=750.531µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.916513425Z level=info msg="Executing migration" id="Add column default in alert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.924979256Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" 
duration=8.466662ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.933672531Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.933734502Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=65.721µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.938323729Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.947044493Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.722214ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.951752881Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.952633163Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=879.902µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.957775107Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.963941006Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.165899ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.968845586Z level=info msg="Executing migration" id=create_ngalert_configuration_table 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.969620237Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=774.541µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.974660689Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 17:06:12 
grafana | logger=migrator t=2024-10-31T17:03:29.976295533Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.634504ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.98098118Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.989281039Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.302029ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.995391847Z level=info msg="Executing migration" id="create provenance_type table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:29.996205509Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=813.552µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.000719444Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.001940601Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.219537ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.007158866Z level=info msg="Executing migration" id="create alert_image table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.008612847Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.450091ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.013833141Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.014765726Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=932.315µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.019603005Z level=info msg="Executing migration" id="support longer 
URLs in alert_image table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.019703396Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=100.911µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.024746088Z level=info msg="Executing migration" id=create_alert_configuration_history_table 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.026334091Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.588433ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.034062742Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.034963945Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=901.223µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.038178321Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.03880025Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.043516458Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.044150977Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=634.449µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.050579849Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.051561063Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=980.964µs 
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.055764923Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.064690891Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.926738ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.068511046Z level=info msg="Executing migration" id="create library_element table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.069505391Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=994.065µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.077204211Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.078744613Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.539142ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.085060344Z level=info msg="Executing migration" id="create library_element_connection table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.086414483Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.354289ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.091952593Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.092921536Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=965.893µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.098927083Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.100513315Z level=info 
msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.585242ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.104201178Z level=info msg="Executing migration" id="increase max description length to 2048" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.104241219Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=36.881µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.108974327Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.109035127Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=60.76µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.114646868Z level=info msg="Executing migration" id="add library_element folder uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.124649032Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.002424ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.128531627Z level=info msg="Executing migration" id="populate library_element folder_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.128927493Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=384.216µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.131939796Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.132993502Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.053646ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.137980243Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.138409109Z 
level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=428.796µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.14477353Z level=info msg="Executing migration" id="create data_keys table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.145839196Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.065426ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.181597259Z level=info msg="Executing migration" id="create secrets table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.182437721Z level=info msg="Migration successfully executed" id="create secrets table" duration=847.111µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.188540619Z level=info msg="Executing migration" id="rename data_keys name column to id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.219157138Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.617259ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.277339183Z level=info msg="Executing migration" id="add name column into data_keys" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.287399157Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.072295ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.290261677Z level=info msg="Executing migration" id="copy data_keys id column values into name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.290364869Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=103.502µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.29460587Z level=info msg="Executing migration" id="rename data_keys name column to label" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.326436347Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" 
duration=31.830397ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.32944979Z level=info msg="Executing migration" id="rename data_keys id column back to name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.358833832Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.384572ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.362252071Z level=info msg="Executing migration" id="create kv_store table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.363108473Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=856.002µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.366458751Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.367604588Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.145307ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.372482308Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.372788522Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=305.694µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.375820835Z level=info msg="Executing migration" id="create permission table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.376746719Z level=info msg="Migration successfully executed" id="create permission table" duration=924.864µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.379801103Z level=info msg="Executing migration" id="add unique index permission.role_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.381497197Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.695644ms 17:06:12 grafana | 
logger=migrator t=2024-10-31T17:03:30.386154563Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.38800861Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.850767ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.391216706Z level=info msg="Executing migration" id="create role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.39217013Z level=info msg="Migration successfully executed" id="create role table" duration=952.874µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.395193013Z level=info msg="Executing migration" id="add column display_name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.402851363Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.64765ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.408043757Z level=info msg="Executing migration" id="add column group_name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.415200691Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.156664ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.418286375Z level=info msg="Executing migration" id="add index role.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.419249818Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=963.453µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.422246662Z level=info msg="Executing migration" id="add unique index role_org_id_name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.423220585Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=973.563µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.428177936Z level=info msg="Executing migration" id="add index role_org_id_uid" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:30.429187001Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.005595ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.432063503Z level=info msg="Executing migration" id="create team role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.432916525Z level=info msg="Migration successfully executed" id="create team role table" duration=853.282µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.436058999Z level=info msg="Executing migration" id="add index team_role.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.437144765Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.085946ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.441899164Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.442975529Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.075915ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.445928722Z level=info msg="Executing migration" id="add index team_role.team_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.446970466Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.041584ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.451322019Z level=info msg="Executing migration" id="create user role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.4521328Z level=info msg="Migration successfully executed" id="create user role table" duration=810.721µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.45629986Z level=info msg="Executing migration" id="add index user_role.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.457399456Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.099506ms 
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.46049405Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.462166305Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.671385ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.465438251Z level=info msg="Executing migration" id="add index user_role.user_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.466560737Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.122586ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.471202084Z level=info msg="Executing migration" id="create builtin role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.472027106Z level=info msg="Migration successfully executed" id="create builtin role table" duration=824.662µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.475216741Z level=info msg="Executing migration" id="add index builtin_role.role_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.476711383Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.494452ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.480870022Z level=info msg="Executing migration" id="add index builtin_role.name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.482333944Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.463272ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.48839405Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.500307591Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.912961ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.535510527Z level=info 
msg="Executing migration" id="add index builtin_role.org_id" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.536563282Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.054384ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.540886104Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.542471186Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.584483ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.547800863Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.549393846Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.592362ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.555314201Z level=info msg="Executing migration" id="add unique index role.uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.557252169Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.941698ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.563051852Z level=info msg="Executing migration" id="create seed assignment table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.563871343Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=817.511µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.566936937Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.568108254Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.170967ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.572150332Z level=info msg="Executing migration" id="add column hidden to role table" 
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.580730315Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.577963ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.584232015Z level=info msg="Executing migration" id="permission kind migration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.593051973Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.819777ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.597281453Z level=info msg="Executing migration" id="permission attribute migration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.603628474Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.335141ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.609540379Z level=info msg="Executing migration" id="permission identifier migration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.618076041Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.535653ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.621555961Z level=info msg="Executing migration" id="add permission identifier index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.622301672Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=745.751µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.625614579Z level=info msg="Executing migration" id="add permission action scope role_id index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.62636776Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=752.981µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.631749857Z level=info msg="Executing migration" id="remove permission role_id action scope index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.633414981Z level=info 
msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.665214ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.636696228Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.644661472Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=7.966874ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.647680016Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.64866638Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=986.114µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.654786288Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.656478812Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.693864ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.660263377Z level=info msg="Executing migration" id="create query_history table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.661653547Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.39082ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.666627628Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.667740753Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.113105ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.672824587Z level=info msg="Executing migration" id="alter table query_history alter 
column created_by type to bigint" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.67301803Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=192.902µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.677112849Z level=info msg="Executing migration" id="create query_history_details table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.678514859Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.40127ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.685261385Z level=info msg="Executing migration" id="rbac disabled migrator" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.685296745Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=36.35µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.689027199Z level=info msg="Executing migration" id="teams permissions migration" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.690034383Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.006844ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.695263809Z level=info msg="Executing migration" id="dashboard permissions" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.696253993Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=995.504µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.699915085Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.700753878Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=840.173µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.703898552Z level=info msg="Executing migration" id="drop managed folder create actions" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.704096935Z 
level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=198.623µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.708479849Z level=info msg="Executing migration" id="alerting notification permissions" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.708934465Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=455.056µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.714082659Z level=info msg="Executing migration" id="create query_history_star table v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.714909391Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=828.842µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.718086566Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.719585787Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.497341ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.72389253Z level=info msg="Executing migration" id="add column org_id in query_history_star" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.735420715Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.528135ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.742430545Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.742555797Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=129.302µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.746993291Z level=info msg="Executing migration" id="create correlation table v1" 17:06:12 grafana | logger=migrator 
t=2024-10-31T17:03:30.748201578Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.208187ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.754534879Z level=info msg="Executing migration" id="add index correlations.uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.756552078Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.019119ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.761219775Z level=info msg="Executing migration" id="add index correlations.source_uid" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.762405362Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.185927ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.765573727Z level=info msg="Executing migration" id="add correlation config column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.774197971Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.623294ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.778250969Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.77905752Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=806.421µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.782036104Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.782967757Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=931.053µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.787926318Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.808345041Z level=info msg="Migration 
successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.417823ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.812314178Z level=info msg="Executing migration" id="create correlation v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.81312771Z level=info msg="Migration successfully executed" id="create correlation v2" duration=813.602µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.816286336Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.817063387Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=776.921µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.824112727Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.826049826Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.941218ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.829454474Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.830889304Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.43572ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.833887858Z level=info msg="Executing migration" id="copy correlation v1 to v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.834112691Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=224.893µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.839348096Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.840600434Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" 
duration=1.248417ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.84452182Z level=info msg="Executing migration" id="add provisioning column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.856108076Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.591936ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.867086444Z level=info msg="Executing migration" id="add type column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.873309643Z level=info msg="Migration successfully executed" id="add type column" duration=6.223079ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.87587454Z level=info msg="Executing migration" id="create entity_events table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.876484569Z level=info msg="Migration successfully executed" id="create entity_events table" duration=610.119µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.880610988Z level=info msg="Executing migration" id="create dashboard public config v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.881362039Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=749.221µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.883626071Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.883971536Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.88632401Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.886653924Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.888914107Z level=info msg="Executing migration" id="Drop old dashboard public config table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.889473035Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=558.578µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.893410692Z level=info msg="Executing migration" id="recreate dashboard public config v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.894110852Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=700.12µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.897546631Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.898349903Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=803.542µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.902861437Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.904242827Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.38658ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.907456243Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.909105647Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.649933ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.912995023Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.914581405Z level=info 
msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.587492ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.918559553Z level=info msg="Executing migration" id="Drop public config table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.919294723Z level=info msg="Migration successfully executed" id="Drop public config table" duration=735.32µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.922357327Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.923455423Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.097836ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.927175576Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.928288152Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.110706ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.933944874Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.935029199Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.084865ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.938210564Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.939295129Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.085125ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.945089704Z level=info msg="Executing migration" id="Rename table 
dashboard_public_config to dashboard_public - v2" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.970436827Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.347114ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.973284088Z level=info msg="Executing migration" id="add annotations_enabled column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.979678129Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.392981ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.983168369Z level=info msg="Executing migration" id="add time_selection_enabled column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.991738002Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.578873ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.998279167Z level=info msg="Executing migration" id="delete orphaned public dashboards" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:30.999004747Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=1.181397ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.003388909Z level=info msg="Executing migration" id="add share column" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.011878361Z level=info msg="Migration successfully executed" id="add share column" duration=8.489472ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.015096988Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.015225419Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=129.471µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.018183942Z level=info msg="Executing migration" id="create file table" 17:06:12 
grafana | logger=migrator t=2024-10-31T17:03:31.018881312Z level=info msg="Migration successfully executed" id="create file table" duration=697.31µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.02291481Z level=info msg="Executing migration" id="file table idx: path natural pk" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.024598093Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.683253ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.028057343Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.029762928Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.704765ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.03474529Z level=info msg="Executing migration" id="create file_meta table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.03618727Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.4455ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.039277544Z level=info msg="Executing migration" id="file table idx: path key" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.040434711Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.157237ms 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.043252492Z level=info msg="Executing migration" id="set path collation in file table" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.043316062Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=64.34µs 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.046402516Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.046467017Z level=info 
msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=65.201µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.051091824Z level=info msg="Executing migration" id="managed permissions migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.051651722Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=560.188µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.054228308Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.054530913Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=302.515µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.057161331Z level=info msg="Executing migration" id="RBAC action name migrator"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.059034417Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.868536ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.062538687Z level=info msg="Executing migration" id="Add UID column to playlist"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.071205352Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.668075ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.074283826Z level=info msg="Executing migration" id="Update uid column values in playlist"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.074477219Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=192.3µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.078120142Z level=info msg="Executing migration" id="Add index for uid in playlist"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.079372399Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.245057ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.083369317Z level=info msg="Executing migration" id="update group index for alert rules"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.084409062Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.037196ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.087572247Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.087871322Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=299.015µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.091154758Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.091937249Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=782.101µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.095190146Z level=info msg="Executing migration" id="add action column to seed_assignment"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.104275857Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.102581ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.109241568Z level=info msg="Executing migration" id="add scope column to seed_assignment"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.115815501Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.543443ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.119100629Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.120022252Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=921.693µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.125697914Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.200093491Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.390987ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.206047886Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.20699332Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=945.574µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.210198155Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.21191023Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.711285ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.216397484Z level=info msg="Executing migration" id="add primary key to seed_assigment"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.241475074Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.07764ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.24537742Z level=info msg="Executing migration" id="add origin column to seed_assignment"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.254886666Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.509416ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.261377739Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.262334464Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=952.465µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.267447737Z level=info msg="Executing migration" id="prevent seeding OnCall access"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.267639619Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=191.632µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.271024638Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.27122831Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=203.332µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.275558613Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.275771036Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=207.103µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.279349837Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.279699143Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=349.746µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.285472825Z level=info msg="Executing migration" id="create folder table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.286983807Z level=info msg="Migration successfully executed" id="create folder table" duration=1.510702ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.292521386Z level=info msg="Executing migration" id="Add index for parent_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.29491726Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.397364ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.299317484Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.300117626Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=799.962µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.306660029Z level=info msg="Executing migration" id="Update folder title length"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.306685629Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.62µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.312157108Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.314000045Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.843707ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.320333925Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.321704495Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.37186ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.324971751Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.328222868Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=3.249357ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.332037583Z level=info msg="Executing migration" id="Sync dashboard and folder table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.33253206Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=494.228µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.336656999Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.336949503Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=292.454µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.341438338Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.342565844Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.127386ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.345719509Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.346839225Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.119576ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.350964464Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.35206642Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.096376ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.356020837Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.357145043Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.123906ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.363478134Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.365341151Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.860797ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.369672162Z level=info msg="Executing migration" id="create anon_device table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.370633826Z level=info msg="Migration successfully executed" id="create anon_device table" duration=961.504µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.374062835Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.375185682Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.123047ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.379147119Z level=info msg="Executing migration" id="add index anon_device.updated_at"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.380268514Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.121405ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.385861925Z level=info msg="Executing migration" id="create signing_key table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.38691282Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.048965ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.390477421Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.392291507Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.813776ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.395805867Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.397541492Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.735775ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.40153779Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.401900255Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=356.905µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.406276877Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.415615262Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.337935ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.418965349Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.419777832Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=814.112µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.424839124Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.424875045Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=40.381µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.42808743Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.429741304Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.653684ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.432657566Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.432684126Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=19.87µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.435634329Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.436883747Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.249338ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.441833367Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.443356789Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.522902ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.447201905Z level=info msg="Executing migration" id="create sso_setting table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.449364386Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.161101ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.456854823Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.457748966Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=898.103µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.463891574Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.464242659Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=348.345µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.4685269Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.469794299Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.267809ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.473163247Z level=info msg="Executing migration" id="create cloud_migration table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.474601988Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.439871ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.478877679Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.479877713Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=999.974µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.484621151Z level=info msg="Executing migration" id="add stack_id column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.49429756Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.675769ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.497548917Z level=info msg="Executing migration" id="add region_slug column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.504693919Z level=info msg="Migration successfully executed" id="add region_slug column" duration=7.149402ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.507916786Z level=info msg="Executing migration" id="add cluster_slug column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.51451999Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=6.603244ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.521241007Z level=info msg="Executing migration" id="add migration uid column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.530946165Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.704938ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.563908629Z level=info msg="Executing migration" id="Update uid column values for migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.564179432Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=287.254µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.571825312Z level=info msg="Executing migration" id="Add unique index migration_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.573796111Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.982049ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.578593918Z level=info msg="Executing migration" id="add migration run uid column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.587975354Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.381406ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.590781114Z level=info msg="Executing migration" id="Update uid column values for migration run"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.590944536Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=163.852µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.594902053Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.596026069Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.117826ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.605095529Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.629479889Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.38706ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.632847587Z level=info msg="Executing migration" id="create cloud_migration_session v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.633986344Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.141027ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.638881084Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.640110111Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.228757ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.65189057Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.652244886Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=348.535µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.658011568Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.65884595Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=835.262µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.663308143Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.693539648Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=30.231655ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.699889859Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.700761001Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=870.972µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.705151324Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.705959945Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=808.461µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.709242433Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.709501926Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=216.492µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.712241025Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.713500523Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.251578ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.718217901Z level=info msg="Executing migration" id="add snapshot upload_url column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.732195262Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=13.978371ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.73555987Z level=info msg="Executing migration" id="add snapshot status column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.743800258Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=8.239398ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.747549202Z level=info msg="Executing migration" id="add snapshot local_directory column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.756867856Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.312913ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.761300769Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.768545853Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=7.244384ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.772218896Z level=info msg="Executing migration" id="add snapshot encryption_key column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.782725166Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=10.50623ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.786368089Z level=info msg="Executing migration" id="add snapshot error_string column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.793108936Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=6.740587ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.797400837Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.798389581Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=988.944µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.802130445Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.840952521Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=38.824126ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.844701146Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.851839058Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.137752ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.856518384Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.866455818Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.934854ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.870071679Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.87012913Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=58.091µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.87363244Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.887219455Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.586945ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.930518726Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.943887588Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=13.377812ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.948072658Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.948598606Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=525.547µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.952592263Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.952929047Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=336.965µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.956681062Z level=info msg="Executing migration" id="add record column to alert_rule table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.966089346Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=9.408124ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.970612421Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.980110067Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.496936ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.98376291Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.994078598Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=10.315278ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:31.997664129Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.006909492Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.245303ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.011196163Z level=info msg="Executing migration" id="Enable traceQL streaming for all Tempo datasources"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.011212063Z level=info msg="Migration successfully executed" id="Enable traceQL streaming for all Tempo datasources" duration=16.28µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.014839846Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.015389933Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=549.847µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.019033976Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.028565252Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.531286ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.032252544Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.039331837Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=7.079423ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.04378248Z level=info msg="Executing migration" id="delete orphaned service account permissions"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.044097574Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=314.954µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.047474613Z level=info msg="Executing migration" id="adding action set permissions"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.048045872Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=571.279µs
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.051707524Z level=info msg="Executing migration" id="create user_external_session table"
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.05279713Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.089026ms
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.057166912Z level=info msg="migrations completed" performed=608 skipped=0 duration=4.582304824s
17:06:12 grafana | logger=migrator t=2024-10-31T17:03:32.057923883Z level=info msg="Unlocking database"
17:06:12 grafana | logger=sqlstore t=2024-10-31T17:03:32.076084073Z level=info msg="Created default admin" user=admin
17:06:12 grafana | logger=sqlstore t=2024-10-31T17:03:32.076312086Z level=info msg="Created default organization"
17:06:12 grafana | logger=secrets t=2024-10-31T17:03:32.081184456Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
17:06:12 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-10-31T17:03:32.133828322Z level=info msg="Restored cache from database" duration=394.786µs
17:06:12 grafana | logger=plugin.store t=2024-10-31T17:03:32.134787375Z level=info msg="Loading plugins..."
17:06:12 grafana | logger=plugins.registration t=2024-10-31T17:03:32.163311204Z level=error msg="Could not register plugin" pluginId=xychart error="plugin xychart is already registered"
17:06:12 grafana | logger=plugins.initialization t=2024-10-31T17:03:32.163331005Z level=error msg="Could not initialize plugin" pluginId=xychart error="plugin xychart is already registered"
17:06:12 grafana | logger=local.finder t=2024-10-31T17:03:32.163402976Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
17:06:12 grafana | logger=plugin.store t=2024-10-31T17:03:32.163414316Z level=info msg="Plugins loaded" count=54 duration=28.627361ms
17:06:12 grafana | logger=query_data t=2024-10-31T17:03:32.169641975Z level=info msg="Query Service initialization"
17:06:12 grafana | logger=live.push_http t=2024-10-31T17:03:32.181007758Z level=info msg="Live Push Gateway initialization"
17:06:12 grafana | logger=ngalert.notifier.alertmanager org=1 t=2024-10-31T17:03:32.186644808Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
17:06:12 grafana | logger=ngalert.state.manager t=2024-10-31T17:03:32.194253838Z level=info msg="Running in alternative execution of Error/NoData mode"
17:06:12 grafana | logger=infra.usagestats.collector t=2024-10-31T17:03:32.196933976Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
17:06:12 grafana | logger=provisioning.datasources t=2024-10-31T17:03:32.199746897Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
17:06:12 grafana | logger=provisioning.alerting t=2024-10-31T17:03:32.218076889Z level=info msg="starting to provision alerting"
17:06:12 grafana | logger=provisioning.alerting t=2024-10-31T17:03:32.218092549Z level=info msg="finished to provision alerting"
17:06:12 grafana | logger=plugin.backgroundinstaller t=2024-10-31T17:03:32.218311032Z level=info msg="Installing plugin"
pluginId=grafana-lokiexplore-app version= 17:06:12 grafana | logger=grafanaStorageLogger t=2024-10-31T17:03:32.218740939Z level=info msg="Storage starting" 17:06:12 grafana | logger=ngalert.state.manager t=2024-10-31T17:03:32.219327038Z level=info msg="Warming state cache for startup" 17:06:12 grafana | logger=http.server t=2024-10-31T17:03:32.222855858Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 17:06:12 grafana | logger=ngalert.multiorg.alertmanager t=2024-10-31T17:03:32.223058321Z level=info msg="Starting MultiOrg Alertmanager" 17:06:12 grafana | logger=ngalert.state.manager t=2024-10-31T17:03:32.263722774Z level=info msg="State cache has been initialized" states=0 duration=44.440656ms 17:06:12 grafana | logger=ngalert.scheduler t=2024-10-31T17:03:32.263756904Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 17:06:12 grafana | logger=ticker t=2024-10-31T17:03:32.263796864Z level=info msg=starting first_tick=2024-10-31T17:03:40Z 17:06:12 grafana | logger=plugins.update.checker t=2024-10-31T17:03:32.29695088Z level=info msg="Update check succeeded" duration=66.771878ms 17:06:12 grafana | logger=grafana.update.checker t=2024-10-31T17:03:32.297254394Z level=info msg="Update check succeeded" duration=78.769839ms 17:06:12 grafana | logger=provisioning.dashboard t=2024-10-31T17:03:32.315903062Z level=info msg="starting to provision dashboards" 17:06:12 grafana | logger=sqlstore.transactions t=2024-10-31T17:03:32.417407937Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 17:06:12 grafana | logger=sqlstore.transactions t=2024-10-31T17:03:32.429660972Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 17:06:12 grafana | logger=sqlstore.transactions t=2024-10-31T17:03:32.506607036Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 
code="database is locked" 17:06:12 grafana | logger=plugin.installer t=2024-10-31T17:03:32.506878899Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 17:06:12 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2024-10-31T17:03:32.684102101Z level=info msg="Patterns update finished" duration=308.845738ms 17:06:12 grafana | logger=provisioning.dashboard t=2024-10-31T17:03:32.71064169Z level=info msg="finished to provision dashboards" 17:06:12 grafana | logger=installer.fs t=2024-10-31T17:03:32.710918795Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.2 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 17:06:12 grafana | logger=plugins.registration t=2024-10-31T17:03:32.739491725Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 17:06:12 grafana | logger=plugin.backgroundinstaller t=2024-10-31T17:03:32.739513415Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=521.189583ms 17:06:12 grafana | logger=grafana-apiserver t=2024-10-31T17:03:32.81523863Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 17:06:12 grafana | logger=grafana-apiserver t=2024-10-31T17:03:32.815718307Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 17:06:12 grafana | logger=grafana-apiserver t=2024-10-31T17:03:32.817030855Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 17:06:12 grafana | logger=infra.usagestats t=2024-10-31T17:05:08.2286946Z level=info msg="Usage stats are ready to report" 17:06:12 =================================== 17:06:12 ======== Logs from kafka ======== 17:06:12 kafka | ===> User 17:06:12 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:06:12 kafka | ===> Configuring ... 17:06:12 kafka | Running in Zookeeper mode... 17:06:12 kafka | ===> Running preflight checks ... 
17:06:12 kafka | ===> Check if /var/lib/kafka/data is writable ... 17:06:12 kafka | ===> Check if Zookeeper is healthy ... 17:06:12 kafka | [2024-10-31 17:03:35,296] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,297] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,297] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,297] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,297] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,297] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/kafka-raft-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/utility-belt-7.7.1-30.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/common-utils-7.7.1.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/kafka_2.13-7.7.1-ccs.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.7.1-ccs.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-7.7.1-ccs.jar:/usr/share/java/cp
-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.7.1-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-server-common-7.7.1-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-4.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.7.1-ccs.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.7.1-ccs.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.7.1.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.j
ar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar:/usr/share/java/cp-base-new/kafka-metadata-7.7.1-ccs.jar (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,298] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,301] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 
17:03:35,304] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:06:12 kafka | [2024-10-31 17:03:35,308] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:06:12 kafka | [2024-10-31 17:03:35,314] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:35,322] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:35,322] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:35,331] INFO Socket connection established, initiating session, client: /172.17.0.9:54172, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:35,358] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000265970000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:35,474] INFO Session: 0x100000265970000 closed (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:35,474] INFO EventThread shut down for session: 0x100000265970000 (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | Using log4j config /etc/kafka/log4j.properties 17:06:12 kafka | ===> Launching ... 17:06:12 kafka | ===> Launching kafka ... 
17:06:12 kafka | [2024-10-31 17:03:35,972] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 17:06:12 kafka | [2024-10-31 17:03:36,170] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:06:12 kafka | [2024-10-31 17:03:36,239] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 17:06:12 kafka | [2024-10-31 17:03:36,240] INFO starting (kafka.server.KafkaServer) 17:06:12 kafka | [2024-10-31 17:03:36,241] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 17:06:12 kafka | [2024-10-31 17:03:36,251] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.version=17.0.12 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kaf
ka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:/usr/bin/../share/java/kafka/netty
-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2
.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 
17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,254] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,256] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) 17:06:12 kafka | [2024-10-31 17:03:36,259] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:06:12 kafka | [2024-10-31 17:03:36,263] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:36,264] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 17:06:12 kafka | [2024-10-31 17:03:36,266] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:36,269] INFO Socket connection established, initiating session, client: /172.17.0.9:54174, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:36,276] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000265970001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 17:06:12 kafka | [2024-10-31 17:03:36,279] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 17:06:12 kafka | [2024-10-31 17:03:36,590] INFO Cluster ID = TF51AfRARcuaw1457m-14A (kafka.server.KafkaServer) 17:06:12 kafka | [2024-10-31 17:03:36,635] INFO KafkaConfig values: 17:06:12 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 17:06:12 kafka | alter.config.policy.class.name = null 17:06:12 kafka | alter.log.dirs.replication.quota.window.num = 11 17:06:12 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 17:06:12 kafka | authorizer.class.name = 17:06:12 kafka | auto.create.topics.enable = true 17:06:12 kafka | auto.include.jmx.reporter = true 17:06:12 kafka | auto.leader.rebalance.enable = true 17:06:12 kafka | background.threads = 10 17:06:12 kafka | broker.heartbeat.interval.ms = 2000 17:06:12 kafka | broker.id = 1 17:06:12 kafka | broker.id.generation.enable = true 17:06:12 kafka | broker.rack = null 17:06:12 kafka | broker.session.timeout.ms = 9000 17:06:12 kafka | client.quota.callback.class = null 17:06:12 kafka | compression.type = producer 17:06:12 kafka | connection.failed.authentication.delay.ms = 100 17:06:12 kafka | connections.max.idle.ms = 600000 17:06:12 kafka | connections.max.reauth.ms = 0 17:06:12 kafka | control.plane.listener.name = null 17:06:12 kafka | controlled.shutdown.enable = true 17:06:12 kafka | controlled.shutdown.max.retries = 3 17:06:12 kafka | controlled.shutdown.retry.backoff.ms = 5000 17:06:12 kafka | 
controller.listener.names = null 17:06:12 kafka | controller.quorum.append.linger.ms = 25 17:06:12 kafka | controller.quorum.election.backoff.max.ms = 1000 17:06:12 kafka | controller.quorum.election.timeout.ms = 1000 17:06:12 kafka | controller.quorum.fetch.timeout.ms = 2000 17:06:12 kafka | controller.quorum.request.timeout.ms = 2000 17:06:12 kafka | controller.quorum.retry.backoff.ms = 20 17:06:12 kafka | controller.quorum.voters = [] 17:06:12 kafka | controller.quota.window.num = 11 17:06:12 kafka | controller.quota.window.size.seconds = 1 17:06:12 kafka | controller.socket.timeout.ms = 30000 17:06:12 kafka | create.topic.policy.class.name = null 17:06:12 kafka | default.replication.factor = 1 17:06:12 kafka | delegation.token.expiry.check.interval.ms = 3600000 17:06:12 kafka | delegation.token.expiry.time.ms = 86400000 17:06:12 kafka | delegation.token.master.key = null 17:06:12 kafka | delegation.token.max.lifetime.ms = 604800000 17:06:12 kafka | delegation.token.secret.key = null 17:06:12 kafka | delete.records.purgatory.purge.interval.requests = 1 17:06:12 kafka | delete.topic.enable = true 17:06:12 kafka | early.start.listeners = null 17:06:12 kafka | eligible.leader.replicas.enable = false 17:06:12 kafka | fetch.max.bytes = 57671680 17:06:12 kafka | fetch.purgatory.purge.interval.requests = 1000 17:06:12 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] 17:06:12 kafka | group.consumer.heartbeat.interval.ms = 5000 17:06:12 kafka | group.consumer.max.heartbeat.interval.ms = 15000 17:06:12 kafka | group.consumer.max.session.timeout.ms = 60000 17:06:12 kafka | group.consumer.max.size = 2147483647 17:06:12 kafka | group.consumer.min.heartbeat.interval.ms = 5000 17:06:12 kafka | group.consumer.min.session.timeout.ms = 45000 17:06:12 kafka | group.consumer.session.timeout.ms = 45000 17:06:12 kafka | group.coordinator.new.enable = false 17:06:12 kafka | 
group.coordinator.rebalance.protocols = [classic] 17:06:12 kafka | group.coordinator.threads = 1 17:06:12 kafka | group.initial.rebalance.delay.ms = 3000 17:06:12 kafka | group.max.session.timeout.ms = 1800000 17:06:12 kafka | group.max.size = 2147483647 17:06:12 kafka | group.min.session.timeout.ms = 6000 17:06:12 kafka | initial.broker.registration.timeout.ms = 60000 17:06:12 kafka | inter.broker.listener.name = PLAINTEXT 17:06:12 kafka | inter.broker.protocol.version = 3.7-IV4 17:06:12 kafka | kafka.metrics.polling.interval.secs = 10 17:06:12 kafka | kafka.metrics.reporters = [] 17:06:12 kafka | leader.imbalance.check.interval.seconds = 300 17:06:12 kafka | leader.imbalance.per.broker.percentage = 10 17:06:12 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 17:06:12 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 17:06:12 kafka | log.cleaner.backoff.ms = 15000 17:06:12 kafka | log.cleaner.dedupe.buffer.size = 134217728 17:06:12 kafka | log.cleaner.delete.retention.ms = 86400000 17:06:12 kafka | log.cleaner.enable = true 17:06:12 kafka | log.cleaner.io.buffer.load.factor = 0.9 17:06:12 kafka | log.cleaner.io.buffer.size = 524288 17:06:12 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 17:06:12 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 17:06:12 kafka | log.cleaner.min.cleanable.ratio = 0.5 17:06:12 kafka | log.cleaner.min.compaction.lag.ms = 0 17:06:12 kafka | log.cleaner.threads = 1 17:06:12 kafka | log.cleanup.policy = [delete] 17:06:12 kafka | log.dir = /tmp/kafka-logs 17:06:12 kafka | log.dirs = /var/lib/kafka/data 17:06:12 kafka | log.flush.interval.messages = 9223372036854775807 17:06:12 kafka | log.flush.interval.ms = null 17:06:12 kafka | log.flush.offset.checkpoint.interval.ms = 60000 17:06:12 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 17:06:12 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 17:06:12 kafka | 
log.index.interval.bytes = 4096
17:06:12 kafka | log.index.size.max.bytes = 10485760
17:06:12 kafka | log.local.retention.bytes = -2
17:06:12 kafka | log.local.retention.ms = -2
17:06:12 kafka | log.message.downconversion.enable = true
17:06:12 kafka | log.message.format.version = 3.0-IV1
17:06:12 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
17:06:12 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
17:06:12 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
17:06:12 kafka | log.message.timestamp.type = CreateTime
17:06:12 kafka | log.preallocate = false
17:06:12 kafka | log.retention.bytes = -1
17:06:12 kafka | log.retention.check.interval.ms = 300000
17:06:12 kafka | log.retention.hours = 168
17:06:12 kafka | log.retention.minutes = null
17:06:12 kafka | log.retention.ms = null
17:06:12 kafka | log.roll.hours = 168
17:06:12 kafka | log.roll.jitter.hours = 0
17:06:12 kafka | log.roll.jitter.ms = null
17:06:12 kafka | log.roll.ms = null
17:06:12 kafka | log.segment.bytes = 1073741824
17:06:12 kafka | log.segment.delete.delay.ms = 60000
17:06:12 kafka | max.connection.creation.rate = 2147483647
17:06:12 kafka | max.connections = 2147483647
17:06:12 kafka | max.connections.per.ip = 2147483647
17:06:12 kafka | max.connections.per.ip.overrides =
17:06:12 kafka | max.incremental.fetch.session.cache.slots = 1000
17:06:12 kafka | message.max.bytes = 1048588
17:06:12 kafka | metadata.log.dir = null
17:06:12 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
17:06:12 kafka | metadata.log.max.snapshot.interval.ms = 3600000
17:06:12 kafka | metadata.log.segment.bytes = 1073741824
17:06:12 kafka | metadata.log.segment.min.bytes = 8388608
17:06:12 kafka | metadata.log.segment.ms = 604800000
17:06:12 kafka | metadata.max.idle.interval.ms = 500
17:06:12 kafka | metadata.max.retention.bytes = 104857600
17:06:12 kafka | metadata.max.retention.ms = 604800000
17:06:12 kafka | metric.reporters = []
17:06:12 kafka | metrics.num.samples = 2
17:06:12 kafka | metrics.recording.level = INFO
17:06:12 kafka | metrics.sample.window.ms = 30000
17:06:12 kafka | min.insync.replicas = 1
17:06:12 kafka | node.id = 1
17:06:12 kafka | num.io.threads = 8
17:06:12 kafka | num.network.threads = 3
17:06:12 kafka | num.partitions = 1
17:06:12 kafka | num.recovery.threads.per.data.dir = 1
17:06:12 kafka | num.replica.alter.log.dirs.threads = null
17:06:12 kafka | num.replica.fetchers = 1
17:06:12 kafka | offset.metadata.max.bytes = 4096
17:06:12 kafka | offsets.commit.required.acks = -1
17:06:12 kafka | offsets.commit.timeout.ms = 5000
17:06:12 kafka | offsets.load.buffer.size = 5242880
17:06:12 kafka | offsets.retention.check.interval.ms = 600000
17:06:12 kafka | offsets.retention.minutes = 10080
17:06:12 kafka | offsets.topic.compression.codec = 0
17:06:12 kafka | offsets.topic.num.partitions = 50
17:06:12 kafka | offsets.topic.replication.factor = 1
17:06:12 kafka | offsets.topic.segment.bytes = 104857600
17:06:12 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
17:06:12 kafka | password.encoder.iterations = 4096
17:06:12 kafka | password.encoder.key.length = 128
17:06:12 kafka | password.encoder.keyfactory.algorithm = null
17:06:12 kafka | password.encoder.old.secret = null
17:06:12 kafka | password.encoder.secret = null
17:06:12 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
17:06:12 kafka | process.roles = []
17:06:12 kafka | producer.id.expiration.check.interval.ms = 600000
17:06:12 kafka | producer.id.expiration.ms = 86400000
17:06:12 kafka | producer.purgatory.purge.interval.requests = 1000
17:06:12 kafka | queued.max.request.bytes = -1
17:06:12 kafka | queued.max.requests = 500
17:06:12 kafka | quota.window.num = 11
17:06:12 kafka | quota.window.size.seconds = 1
17:06:12 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
17:06:12 kafka | remote.log.manager.task.interval.ms = 30000
17:06:12 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
17:06:12 kafka | remote.log.manager.task.retry.backoff.ms = 500
17:06:12 kafka | remote.log.manager.task.retry.jitter = 0.2
17:06:12 kafka | remote.log.manager.thread.pool.size = 10
17:06:12 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
17:06:12 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
17:06:12 kafka | remote.log.metadata.manager.class.path = null
17:06:12 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
17:06:12 kafka | remote.log.metadata.manager.listener.name = null
17:06:12 kafka | remote.log.reader.max.pending.tasks = 100
17:06:12 kafka | remote.log.reader.threads = 10
17:06:12 kafka | remote.log.storage.manager.class.name = null
17:06:12 kafka | remote.log.storage.manager.class.path = null
17:06:12 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
17:06:12 kafka | remote.log.storage.system.enable = false
17:06:12 kafka | replica.fetch.backoff.ms = 1000
17:06:12 kafka | replica.fetch.max.bytes = 1048576
17:06:12 kafka | replica.fetch.min.bytes = 1
17:06:12 kafka | replica.fetch.response.max.bytes = 10485760
17:06:12 kafka | replica.fetch.wait.max.ms = 500
17:06:12 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
17:06:12 kafka | replica.lag.time.max.ms = 30000
17:06:12 kafka | replica.selector.class = null
17:06:12 kafka | replica.socket.receive.buffer.bytes = 65536
17:06:12 kafka | replica.socket.timeout.ms = 30000
17:06:12 kafka | replication.quota.window.num = 11
17:06:12 kafka | replication.quota.window.size.seconds = 1
17:06:12 kafka | request.timeout.ms = 30000
17:06:12 kafka | reserved.broker.max.id = 1000
17:06:12 kafka | sasl.client.callback.handler.class = null
17:06:12 kafka | sasl.enabled.mechanisms = [GSSAPI]
17:06:12 kafka | sasl.jaas.config = null
17:06:12 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:06:12 kafka | sasl.kerberos.min.time.before.relogin = 60000
17:06:12 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
17:06:12 kafka | sasl.kerberos.service.name = null
17:06:12 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
17:06:12 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
17:06:12 kafka | sasl.login.callback.handler.class = null
17:06:12 kafka | sasl.login.class = null
17:06:12 kafka | sasl.login.connect.timeout.ms = null
17:06:12 kafka | sasl.login.read.timeout.ms = null
17:06:12 kafka | sasl.login.refresh.buffer.seconds = 300
17:06:12 kafka | sasl.login.refresh.min.period.seconds = 60
17:06:12 kafka | sasl.login.refresh.window.factor = 0.8
17:06:12 kafka | sasl.login.refresh.window.jitter = 0.05
17:06:12 kafka | sasl.login.retry.backoff.max.ms = 10000
17:06:12 kafka | sasl.login.retry.backoff.ms = 100
17:06:12 kafka | sasl.mechanism.controller.protocol = GSSAPI
17:06:12 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
17:06:12 kafka | sasl.oauthbearer.clock.skew.seconds = 30
17:06:12 kafka | sasl.oauthbearer.expected.audience = null
17:06:12 kafka | sasl.oauthbearer.expected.issuer = null
17:06:12 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:06:12 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:06:12 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:06:12 kafka | sasl.oauthbearer.jwks.endpoint.url = null
17:06:12 kafka | sasl.oauthbearer.scope.claim.name = scope
17:06:12 kafka | sasl.oauthbearer.sub.claim.name = sub
17:06:12 kafka | sasl.oauthbearer.token.endpoint.url = null
17:06:12 kafka | sasl.server.callback.handler.class = null
17:06:12 kafka | sasl.server.max.receive.size = 524288
17:06:12 kafka | security.inter.broker.protocol = PLAINTEXT
17:06:12 kafka | security.providers = null
17:06:12 kafka | server.max.startup.time.ms = 9223372036854775807
17:06:12 kafka | socket.connection.setup.timeout.max.ms = 30000
17:06:12 kafka | socket.connection.setup.timeout.ms = 10000
17:06:12 kafka | socket.listen.backlog.size = 50
17:06:12 kafka | socket.receive.buffer.bytes = 102400
17:06:12 kafka | socket.request.max.bytes = 104857600
17:06:12 kafka | socket.send.buffer.bytes = 102400
17:06:12 kafka | ssl.allow.dn.changes = false
17:06:12 kafka | ssl.allow.san.changes = false
17:06:12 kafka | ssl.cipher.suites = []
17:06:12 kafka | ssl.client.auth = none
17:06:12 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:06:12 kafka | ssl.endpoint.identification.algorithm = https
17:06:12 kafka | ssl.engine.factory.class = null
17:06:12 kafka | ssl.key.password = null
17:06:12 kafka | ssl.keymanager.algorithm = SunX509
17:06:12 kafka | ssl.keystore.certificate.chain = null
17:06:12 kafka | ssl.keystore.key = null
17:06:12 kafka | ssl.keystore.location = null
17:06:12 kafka | ssl.keystore.password = null
17:06:12 kafka | ssl.keystore.type = JKS
17:06:12 kafka | ssl.principal.mapping.rules = DEFAULT
17:06:12 kafka | ssl.protocol = TLSv1.3
17:06:12 kafka | ssl.provider = null
17:06:12 kafka | ssl.secure.random.implementation = null
17:06:12 kafka | ssl.trustmanager.algorithm = PKIX
17:06:12 kafka | ssl.truststore.certificates = null
17:06:12 kafka | ssl.truststore.location = null
17:06:12 kafka | ssl.truststore.password = null
17:06:12 kafka | ssl.truststore.type = JKS
17:06:12 kafka | telemetry.max.bytes = 1048576
17:06:12 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
17:06:12 kafka | transaction.max.timeout.ms = 900000
17:06:12 kafka | transaction.partition.verification.enable = true
17:06:12 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
17:06:12 kafka | transaction.state.log.load.buffer.size = 5242880
17:06:12 kafka | transaction.state.log.min.isr = 2
17:06:12 kafka | transaction.state.log.num.partitions = 50
17:06:12 kafka | transaction.state.log.replication.factor = 3
17:06:12 kafka | transaction.state.log.segment.bytes = 104857600
17:06:12 kafka | transactional.id.expiration.ms = 604800000
17:06:12 kafka | unclean.leader.election.enable = false
17:06:12 kafka | unstable.api.versions.enable = false
17:06:12 kafka | unstable.metadata.versions.enable = false
17:06:12 kafka | zookeeper.clientCnxnSocket = null
17:06:12 kafka | zookeeper.connect = zookeeper:2181
17:06:12 kafka | zookeeper.connection.timeout.ms = null
17:06:12 kafka | zookeeper.max.in.flight.requests = 10
17:06:12 kafka | zookeeper.metadata.migration.enable = false
17:06:12 kafka | zookeeper.metadata.migration.min.batch.size = 200
17:06:12 kafka | zookeeper.session.timeout.ms = 18000
17:06:12 kafka | zookeeper.set.acl = false
17:06:12 kafka | zookeeper.ssl.cipher.suites = null
17:06:12 kafka | zookeeper.ssl.client.enable = false
17:06:12 kafka | zookeeper.ssl.crl.enable = false
17:06:12 kafka | zookeeper.ssl.enabled.protocols = null
17:06:12 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
17:06:12 kafka | zookeeper.ssl.keystore.location = null
17:06:12 kafka | zookeeper.ssl.keystore.password = null
17:06:12 kafka | zookeeper.ssl.keystore.type = null
17:06:12 kafka | zookeeper.ssl.ocsp.enable = false
17:06:12 kafka | zookeeper.ssl.protocol = TLSv1.2
17:06:12 kafka | zookeeper.ssl.truststore.location = null
17:06:12 kafka | zookeeper.ssl.truststore.password = null
17:06:12 kafka | zookeeper.ssl.truststore.type = null
17:06:12 kafka | (kafka.server.KafkaConfig)
17:06:12 kafka | [2024-10-31 17:03:36,666] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:06:12 kafka | [2024-10-31 17:03:36,666] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:06:12 kafka | [2024-10-31 17:03:36,667] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:06:12 kafka | [2024-10-31 17:03:36,669] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
17:06:12 kafka | [2024-10-31 17:03:36,673] INFO [KafkaServer id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer)
17:06:12 kafka | [2024-10-31 17:03:36,725] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:03:36,733] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:03:36,742] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:03:36,744] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:03:36,745] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:03:36,756] INFO Starting the log cleaner (kafka.log.LogCleaner)
17:06:12 kafka | [2024-10-31 17:03:36,800] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
17:06:12 kafka | [2024-10-31 17:03:36,812] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
17:06:12 kafka | [2024-10-31 17:03:36,822] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
17:06:12 kafka | [2024-10-31 17:03:36,845] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
17:06:12 kafka | [2024-10-31 17:03:37,100] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
17:06:12 kafka | [2024-10-31 17:03:37,115] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
17:06:12 kafka | [2024-10-31 17:03:37,115] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
17:06:12 kafka | [2024-10-31 17:03:37,119] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
17:06:12 kafka | [2024-10-31 17:03:37,122] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
17:06:12 kafka | [2024-10-31 17:03:37,141] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,146] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,148] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,150] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,151] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,163] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
17:06:12 kafka | [2024-10-31 17:03:37,164] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
17:06:12 kafka | [2024-10-31 17:03:37,186] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
17:06:12 kafka | [2024-10-31 17:03:37,211] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1730394217199,1730394217199,1,0,0,72057604332257281,258,0,27
17:06:12 kafka | (kafka.zk.KafkaZkClient)
17:06:12 kafka | [2024-10-31 17:03:37,212] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
17:06:12 kafka | [2024-10-31 17:03:37,243] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
17:06:12 kafka | [2024-10-31 17:03:37,248] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,253] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,254] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,265] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
17:06:12 kafka | [2024-10-31 17:03:37,265] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:03:37,269] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:03:37,275] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,279] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,280] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
17:06:12 kafka | [2024-10-31 17:03:37,283] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
17:06:12 kafka | [2024-10-31 17:03:37,285] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
17:06:12 kafka | [2024-10-31 17:03:37,285] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
17:06:12 kafka | [2024-10-31 17:03:37,315] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.7-IV4, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
17:06:12 kafka | [2024-10-31 17:03:37,315] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,319] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,324] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,324] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
17:06:12 kafka | [2024-10-31 17:03:37,329] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,345] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
17:06:12 kafka | [2024-10-31 17:03:37,345] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,350] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,354] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
17:06:12 kafka | [2024-10-31 17:03:37,356] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
17:06:12 kafka | [2024-10-31 17:03:37,358] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
17:06:12 kafka | [2024-10-31 17:03:37,359] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
17:06:12 kafka | [2024-10-31 17:03:37,364] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
17:06:12 kafka | [2024-10-31 17:03:37,365] INFO Kafka version: 7.7.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
17:06:12 kafka | [2024-10-31 17:03:37,365] INFO Kafka commitId: 91d86f33092378c89731b4a9cf1ce5db831a2b07 (org.apache.kafka.common.utils.AppInfoParser)
17:06:12 kafka | [2024-10-31 17:03:37,365] INFO Kafka startTimeMs: 1730394217362 (org.apache.kafka.common.utils.AppInfoParser)
17:06:12 kafka | [2024-10-31 17:03:37,366] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,366] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
17:06:12 kafka | [2024-10-31 17:03:37,366] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,367] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,368] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,374] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,374] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,374] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,375] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
17:06:12 kafka | [2024-10-31 17:03:37,376] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,379] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:03:37,384] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,384] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,386] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,386] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,387] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,387] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,389] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
17:06:12 kafka | [2024-10-31 17:03:37,389] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,394] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,394] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,394] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,394] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,395] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,396] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
17:06:12 kafka | [2024-10-31 17:03:37,405] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:37,454] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
17:06:12 kafka | [2024-10-31 17:03:37,469] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:03:37,524] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
17:06:12 kafka | [2024-10-31 17:03:42,406] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:03:42,407] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,105] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
17:06:12 kafka | [2024-10-31 17:04:06,106] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
17:06:12 kafka | [2024-10-31 17:04:06,112] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,121] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,148] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(fKuDV6aIQuGRibJsXm1jRw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,149] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,151] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,151] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,155] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,155] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,225] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,228] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,230] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,231] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,236] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,237] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,238] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(xR32WGU_TxqA0AadR2hqJw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,238] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
17:06:12 kafka | [2024-10-31 17:04:06,238] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,238] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,238] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,239] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state 
from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,240] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 
17:04:06,240] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,242] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 
for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 
from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,243] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from 
NonExistentReplica to NewReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,244] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,257] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 17:06:12 kafka | [2024-10-31 17:04:06,257] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,319] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,328] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,330] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,331] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,333] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(fKuDV6aIQuGRibJsXm1jRw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,352] INFO [Broker id=1] Finished LeaderAndIsr request in 115ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,359] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=fKuDV6aIQuGRibJsXm1jRw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,367] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,368] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,371] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,372] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,374] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,376] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:12 kafka | 
[2024-10-31 17:04:06,377] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,379] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,379] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,379] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,379] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | 
[2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 
17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,380] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,383] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,384] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,410] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,411] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,412] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,412] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,412] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
17:06:12 kafka | [2024-10-31 17:04:06,412] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,423] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,424] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,425] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,425] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,425] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,436] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,438] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,439] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,439] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,439] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,456] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,456] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,456] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,457] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,457] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,467] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,467] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,467] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,467] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,468] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,473] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,474] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,474] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,474] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,474] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,487] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,487] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,487] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,487] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,488] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,498] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,499] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,499] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,499] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,499] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,515] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,516] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,516] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,516] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,517] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,523] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,524] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,524] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,524] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,524] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,530] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,530] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,531] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,531] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,531] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,573] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,574] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,574] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,574] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,574] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,587] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,588] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,588] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,588] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,588] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,596] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,600] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,600] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,600] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,600] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,618] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,624] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,624] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,624] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,628] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,636] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,637] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,637] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,637] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,637] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,644] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,645] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,645] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,645] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,646] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,651] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,651] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,652] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,652] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,652] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,657] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,657] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,658] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,658] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,658] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,665] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,666] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,666] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,666] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,666] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,672] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,673] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,673] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,675] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,675] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,683] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,683] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,683] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,685] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,685] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,693] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,693] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,693] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,693] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,693] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,700] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,701] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,701] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,701] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,701] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,708] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,709] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,709] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,709] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,709] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,716] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,717] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,717] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,717] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,717] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,723] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,723] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,723] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,726] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,726] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,731] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,731] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,731] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,731] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,731] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,738] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,738] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,738] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,738] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,738] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,751] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,752] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,752] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,752] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,752] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,763] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,764] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,764] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,764] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,764] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,773] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,773] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,773] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,773] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,774] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,784] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,785] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,787] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,787] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,788] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,800] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,801] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,802] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,803] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,803] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,812] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,815] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,815] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,815] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,815] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,824] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,828] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,828] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,828] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,828] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,836] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,836] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,836] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,836] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,837] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,845] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,846] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,846] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,846] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,847] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,853] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,854] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,854] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,854] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,854] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,864] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,865] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,865] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,865] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,865] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,875] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,876] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,876] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,876] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,876] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,884] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,886] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,886] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,886] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,886] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,924] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,925] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,925] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,925] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,926] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,931] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,932] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,932] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,932] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,932] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,938] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,939] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,939] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,939] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,939] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,945] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,946] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,946] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,946] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,946] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,953] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,953] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,954] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,954] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,954] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,960] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,961] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,961] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,961] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,961] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:06,968] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:12 kafka | [2024-10-31 17:04:06,969] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:12 kafka | [2024-10-31 17:04:06,969] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,969] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 17:06:12 kafka | [2024-10-31 17:04:06,969] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,975] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,976] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,976] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,976] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,977] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,984] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
17:06:12 kafka | [2024-10-31 17:04:06,984] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
17:06:12 kafka | [2024-10-31 17:04:06,984] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,984] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
17:06:12 kafka | [2024-10-31 17:04:06,985] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(xR32WGU_TxqA0AadR2hqJw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,988] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,988] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,988] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,988] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,988] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,989] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,990] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,991] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,992] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,993] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:06,995] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,996] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,998] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:06,999] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,000] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,001] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,002] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,003] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,004] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,007] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,007] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,007] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,007] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,007] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,008] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,011] INFO [Broker id=1] Finished LeaderAndIsr request in 631ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,013] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,014] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,015] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,016] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:12 kafka | [2024-10-31 17:04:07,020] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=xR32WGU_TxqA0AadR2hqJw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for 
request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,024] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 
17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
(state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,025] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 
in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,026] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,027] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:12 kafka | [2024-10-31 17:04:07,112] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:07,113] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c5240df4-4957-4aee-bcf1-b1765fb43c2f in Empty state. Created a new member id consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:07,123] INFO [GroupCoordinator 1]: Preparing to rebalance group c5240df4-4957-4aee-bcf1-b1765fb43c2f in state PreparingRebalance with old generation 0 (__consumer_offsets-17) (reason: Adding new member consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:07,124] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:07,879] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d237d326-42e3-4b36-a9eb-fb715dab21ef in Empty state. Created a new member id consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:07,883] INFO [GroupCoordinator 1]: Preparing to rebalance group d237d326-42e3-4b36-a9eb-fb715dab21ef in state PreparingRebalance with old generation 0 (__consumer_offsets-15) (reason: Adding new member consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,132] INFO [GroupCoordinator 1]: Stabilized group c5240df4-4957-4aee-bcf1-b1765fb43c2f generation 1 (__consumer_offsets-17) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,138] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,196] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,196] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f for group c5240df4-4957-4aee-bcf1-b1765fb43c2f for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,884] INFO [GroupCoordinator 1]: Stabilized group d237d326-42e3-4b36-a9eb-fb715dab21ef generation 1 (__consumer_offsets-15) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:12 kafka | [2024-10-31 17:04:10,903] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df for group d237d326-42e3-4b36-a9eb-fb715dab21ef for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:06:12 =================================== 17:06:12 ======== Logs from mariadb ======== 17:06:12 mariadb | 2024-10-31 17:03:28+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:06:12 mariadb | 2024-10-31 17:03:28+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 17:06:12 mariadb | 2024-10-31 17:03:28+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:06:12 mariadb | 2024-10-31 17:03:28+00:00 [Note] [Entrypoint]: Initializing database files 17:06:12 mariadb | 2024-10-31 17:03:29 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:12 mariadb | 2024-10-31 17:03:29 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:12 mariadb | 2024-10-31 17:03:29 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:06:12 mariadb | 17:06:12 mariadb | 17:06:12 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
17:06:12 mariadb | To do so, start the server, then issue the following command: 17:06:12 mariadb | 17:06:12 mariadb | '/usr/bin/mysql_secure_installation' 17:06:12 mariadb | 17:06:12 mariadb | which will also give you the option of removing the test 17:06:12 mariadb | databases and anonymous user created by default. This is 17:06:12 mariadb | strongly recommended for production servers. 17:06:12 mariadb | 17:06:12 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 17:06:12 mariadb | 17:06:12 mariadb | Please report any problems at https://mariadb.org/jira 17:06:12 mariadb | 17:06:12 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 17:06:12 mariadb | 17:06:12 mariadb | Consider joining MariaDB's strong and vibrant community: 17:06:12 mariadb | https://mariadb.org/get-involved/ 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:30+00:00 [Note] [Entrypoint]: Database files initialized 17:06:12 mariadb | 2024-10-31 17:03:30+00:00 [Note] [Entrypoint]: Starting temporary server 17:06:12 mariadb | 2024-10-31 17:03:30+00:00 [Note] [Entrypoint]: Waiting for server startup 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 
17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Number of transaction pools: 1 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Completed initialization of buffer pool 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: 128 rollback segments are active. 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] InnoDB: log sequence number 46590; transaction id 14 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] Plugin 'FEEDBACK' is disabled. 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 17:06:12 mariadb | 2024-10-31 17:03:30 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
17:06:12 mariadb | 2024-10-31 17:03:30 0 [Note] mariadbd: ready for connections. 17:06:12 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 17:06:12 mariadb | 2024-10-31 17:03:31+00:00 [Note] [Entrypoint]: Temporary server started. 17:06:12 mariadb | 2024-10-31 17:03:33+00:00 [Note] [Entrypoint]: Creating user policy_user 17:06:12 mariadb | 2024-10-31 17:03:33+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:33+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:33+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 17:06:12 mariadb | #!/bin/bash -xv 17:06:12 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 17:06:12 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 17:06:12 mariadb | # 17:06:12 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 17:06:12 mariadb | # you may not use this file except in compliance with the License. 17:06:12 mariadb | # You may obtain a copy of the License at 17:06:12 mariadb | # 17:06:12 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 17:06:12 mariadb | # 17:06:12 mariadb | # Unless required by applicable law or agreed to in writing, software 17:06:12 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 17:06:12 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17:06:12 mariadb | # See the License for the specific language governing permissions and 17:06:12 mariadb | # limitations under the License. 
17:06:12 mariadb | 17:06:12 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | do 17:06:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 17:06:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 17:06:12 mariadb | done 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 17:06:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:12 mariadb | 17:06:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 17:06:12 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 17:06:12 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 17:06:12 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:34+00:00 [Note] [Entrypoint]: Stopping temporary server 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: FTS optimize thread exiting. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Starting shutdown... 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Buffer pool(s) dump completed at 241031 17:03:34 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Shutdown completed; log sequence number 332748; transaction id 298 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] mariadbd: Shutdown complete 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:34+00:00 [Note] [Entrypoint]: Temporary server stopped 17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:34+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 
17:06:12 mariadb | 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Number of transaction pools: 1 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Completed initialization of buffer pool 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: 128 rollback segments are active. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: log sequence number 332748; transaction id 299 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] Plugin 'FEEDBACK' is disabled. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
17:06:12 mariadb | 2024-10-31 17:03:34 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] Server socket created on IP: '0.0.0.0'. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] Server socket created on IP: '::'. 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] mariadbd: ready for connections. 17:06:12 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 17:06:12 mariadb | 2024-10-31 17:03:34 0 [Note] InnoDB: Buffer pool(s) load completed at 241031 17:03:34 17:06:12 mariadb | 2024-10-31 17:03:35 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 17:06:12 mariadb | 2024-10-31 17:03:35 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 17:06:12 mariadb | 2024-10-31 17:03:35 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 17:06:12 mariadb | 2024-10-31 17:03:35 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 17:06:12 =================================== 17:06:12 ======== Logs from apex-pdp ======== 17:06:12 policy-apex-pdp | Waiting for mariadb port 3306... 17:06:12 policy-apex-pdp | mariadb (172.17.0.2:3306) open 17:06:12 policy-apex-pdp | Waiting for kafka port 9092... 17:06:12 policy-apex-pdp | kafka (172.17.0.9:9092) open 17:06:12 policy-apex-pdp | Waiting for pap port 6969... 
17:06:12 policy-apex-pdp | pap (172.17.0.10:6969) open 17:06:12 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.074+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.257+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-apex-pdp | allow.auto.create.topics = true 17:06:12 policy-apex-pdp | auto.commit.interval.ms = 5000 17:06:12 policy-apex-pdp | auto.include.jmx.reporter = true 17:06:12 policy-apex-pdp | auto.offset.reset = latest 17:06:12 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:06:12 policy-apex-pdp | check.crcs = true 17:06:12 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:06:12 policy-apex-pdp | client.id = consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-1 17:06:12 policy-apex-pdp | client.rack = 17:06:12 policy-apex-pdp | connections.max.idle.ms = 540000 17:06:12 policy-apex-pdp | default.api.timeout.ms = 60000 17:06:12 policy-apex-pdp | enable.auto.commit = true 17:06:12 policy-apex-pdp | exclude.internal.topics = true 17:06:12 policy-apex-pdp | fetch.max.bytes = 52428800 17:06:12 
policy-apex-pdp | fetch.max.wait.ms = 500 17:06:12 policy-apex-pdp | fetch.min.bytes = 1 17:06:12 policy-apex-pdp | group.id = d237d326-42e3-4b36-a9eb-fb715dab21ef 17:06:12 policy-apex-pdp | group.instance.id = null 17:06:12 policy-apex-pdp | heartbeat.interval.ms = 3000 17:06:12 policy-apex-pdp | interceptor.classes = [] 17:06:12 policy-apex-pdp | internal.leave.group.on.close = true 17:06:12 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-apex-pdp | isolation.level = read_uncommitted 17:06:12 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:06:12 policy-apex-pdp | max.poll.interval.ms = 300000 17:06:12 policy-apex-pdp | max.poll.records = 500 17:06:12 policy-apex-pdp | metadata.max.age.ms = 300000 17:06:12 policy-apex-pdp | metric.reporters = [] 17:06:12 policy-apex-pdp | metrics.num.samples = 2 17:06:12 policy-apex-pdp | metrics.recording.level = INFO 17:06:12 policy-apex-pdp | metrics.sample.window.ms = 30000 17:06:12 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:12 policy-apex-pdp | receive.buffer.bytes = 65536 17:06:12 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:06:12 policy-apex-pdp | reconnect.backoff.ms = 50 17:06:12 policy-apex-pdp | request.timeout.ms = 30000 17:06:12 policy-apex-pdp | retry.backoff.ms = 100 17:06:12 policy-apex-pdp | sasl.client.callback.handler.class = null 17:06:12 policy-apex-pdp | sasl.jaas.config = null 17:06:12 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-apex-pdp | sasl.kerberos.service.name = null 17:06:12 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor 
= 0.8 17:06:12 policy-apex-pdp | sasl.login.callback.handler.class = null 17:06:12 policy-apex-pdp | sasl.login.class = null 17:06:12 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:06:12 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:06:12 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:06:12 policy-apex-pdp | sasl.mechanism = GSSAPI 17:06:12 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:06:12 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-apex-pdp | security.protocol = PLAINTEXT 17:06:12 policy-apex-pdp | security.providers = null 17:06:12 policy-apex-pdp | send.buffer.bytes = 131072 17:06:12 policy-apex-pdp | session.timeout.ms = 45000 17:06:12 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-apex-pdp | ssl.cipher.suites = null 17:06:12 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:06:12 
policy-apex-pdp | ssl.engine.factory.class = null 17:06:12 policy-apex-pdp | ssl.key.password = null 17:06:12 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:06:12 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:06:12 policy-apex-pdp | ssl.keystore.key = null 17:06:12 policy-apex-pdp | ssl.keystore.location = null 17:06:12 policy-apex-pdp | ssl.keystore.password = null 17:06:12 policy-apex-pdp | ssl.keystore.type = JKS 17:06:12 policy-apex-pdp | ssl.protocol = TLSv1.3 17:06:12 policy-apex-pdp | ssl.provider = null 17:06:12 policy-apex-pdp | ssl.secure.random.implementation = null 17:06:12 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-apex-pdp | ssl.truststore.certificates = null 17:06:12 policy-apex-pdp | ssl.truststore.location = null 17:06:12 policy-apex-pdp | ssl.truststore.password = null 17:06:12 policy-apex-pdp | ssl.truststore.type = JKS 17:06:12 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-apex-pdp | 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.429+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.430+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.430+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394247427 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.433+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-1, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Subscribed to topic(s): policy-pdp-pap 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.447+00:00|INFO|ServiceManager|main] service manager starting 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.447+00:00|INFO|ServiceManager|main] service manager starting topics 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.448+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, 
toString()=SingleThreadedBusTopicSource [consumerGroup=d237d326-42e3-4b36-a9eb-fb715dab21ef, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.469+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-apex-pdp | allow.auto.create.topics = true 17:06:12 policy-apex-pdp | auto.commit.interval.ms = 5000 17:06:12 policy-apex-pdp | auto.include.jmx.reporter = true 17:06:12 policy-apex-pdp | auto.offset.reset = latest 17:06:12 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:06:12 policy-apex-pdp | check.crcs = true 17:06:12 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:06:12 policy-apex-pdp | client.id = consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2 17:06:12 policy-apex-pdp | client.rack = 17:06:12 policy-apex-pdp | connections.max.idle.ms = 540000 17:06:12 policy-apex-pdp | default.api.timeout.ms = 60000 17:06:12 policy-apex-pdp | enable.auto.commit = true 17:06:12 policy-apex-pdp | exclude.internal.topics = true 17:06:12 policy-apex-pdp | fetch.max.bytes = 52428800 17:06:12 policy-apex-pdp | fetch.max.wait.ms = 500 17:06:12 policy-apex-pdp | fetch.min.bytes = 1 17:06:12 policy-apex-pdp | group.id = d237d326-42e3-4b36-a9eb-fb715dab21ef 17:06:12 policy-apex-pdp | group.instance.id = null 17:06:12 policy-apex-pdp | heartbeat.interval.ms = 3000 17:06:12 policy-apex-pdp | interceptor.classes = [] 17:06:12 policy-apex-pdp | internal.leave.group.on.close = true 17:06:12 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-apex-pdp | isolation.level = read_uncommitted 
17:06:12 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:06:12 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
17:06:12 policy-apex-pdp | 	max.poll.interval.ms = 300000
17:06:12 policy-apex-pdp | 	max.poll.records = 500
17:06:12 policy-apex-pdp | 	metadata.max.age.ms = 300000
17:06:12 policy-apex-pdp | 	metric.reporters = []
17:06:12 policy-apex-pdp | 	metrics.num.samples = 2
17:06:12 policy-apex-pdp | 	metrics.recording.level = INFO
17:06:12 policy-apex-pdp | 	metrics.sample.window.ms = 30000
17:06:12 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
17:06:12 policy-apex-pdp | 	receive.buffer.bytes = 65536
17:06:12 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
17:06:12 policy-apex-pdp | 	reconnect.backoff.ms = 50
17:06:12 policy-apex-pdp | 	request.timeout.ms = 30000
17:06:12 policy-apex-pdp | 	retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.client.callback.handler.class = null
17:06:12 policy-apex-pdp | 	sasl.jaas.config = null
17:06:12 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:06:12 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
17:06:12 policy-apex-pdp | 	sasl.kerberos.service.name = null
17:06:12 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
17:06:12 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
17:06:12 policy-apex-pdp | 	sasl.login.callback.handler.class = null
17:06:12 policy-apex-pdp | 	sasl.login.class = null
17:06:12 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
17:06:12 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
17:06:12 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
17:06:12 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
17:06:12 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
17:06:12 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
17:06:12 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
17:06:12 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.mechanism = GSSAPI
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
17:06:12 policy-apex-pdp | 	security.protocol = PLAINTEXT
17:06:12 policy-apex-pdp | 	security.providers = null
17:06:12 policy-apex-pdp | 	send.buffer.bytes = 131072
17:06:12 policy-apex-pdp | 	session.timeout.ms = 45000
17:06:12 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
17:06:12 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
17:06:12 policy-apex-pdp | 	ssl.cipher.suites = null
17:06:12 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:06:12 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
17:06:12 policy-apex-pdp | 	ssl.engine.factory.class = null
17:06:12 policy-apex-pdp | 	ssl.key.password = null
17:06:12 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
17:06:12 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
17:06:12 policy-apex-pdp | 	ssl.keystore.key = null
17:06:12 policy-apex-pdp | 	ssl.keystore.location = null
17:06:12 policy-apex-pdp | 	ssl.keystore.password = null
17:06:12 policy-apex-pdp | 	ssl.keystore.type = JKS
17:06:12 policy-apex-pdp | 	ssl.protocol = TLSv1.3
17:06:12 policy-apex-pdp | 	ssl.provider = null
17:06:12 policy-apex-pdp | 	ssl.secure.random.implementation = null
17:06:12 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
17:06:12 policy-apex-pdp | 	ssl.truststore.certificates = null
17:06:12 policy-apex-pdp | 	ssl.truststore.location = null
17:06:12 policy-apex-pdp | 	ssl.truststore.password = null
17:06:12 policy-apex-pdp | 	ssl.truststore.type = JKS
17:06:12 policy-apex-pdp | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
17:06:12 policy-apex-pdp |
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.478+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.478+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.478+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394247478
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.479+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Subscribed to topic(s): policy-pdp-pap
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.479+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3166d225-e34e-4991-a92d-ce7a5ca681fb, alive=false, publisher=null]]: starting
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.493+00:00|INFO|ProducerConfig|main] ProducerConfig values:
17:06:12 policy-apex-pdp | 	acks = -1
17:06:12 policy-apex-pdp | 	auto.include.jmx.reporter = true
17:06:12 policy-apex-pdp | 	batch.size = 16384
17:06:12 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
17:06:12 policy-apex-pdp | 	buffer.memory = 33554432
17:06:12 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
17:06:12 policy-apex-pdp | 	client.id = producer-1
17:06:12 policy-apex-pdp | 	compression.type = none
17:06:12 policy-apex-pdp | 	connections.max.idle.ms = 540000
17:06:12 policy-apex-pdp | 	delivery.timeout.ms = 120000
17:06:12 policy-apex-pdp | 	enable.idempotence = true
17:06:12 policy-apex-pdp | 	interceptor.classes = []
17:06:12 policy-apex-pdp | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
17:06:12 policy-apex-pdp | 	linger.ms = 0
17:06:12 policy-apex-pdp | 	max.block.ms = 60000
17:06:12 policy-apex-pdp | 	max.in.flight.requests.per.connection = 5
17:06:12 policy-apex-pdp | 	max.request.size = 1048576
17:06:12 policy-apex-pdp | 	metadata.max.age.ms = 300000
17:06:12 policy-apex-pdp | 	metadata.max.idle.ms = 300000
17:06:12 policy-apex-pdp | 	metric.reporters = []
17:06:12 policy-apex-pdp | 	metrics.num.samples = 2
17:06:12 policy-apex-pdp | 	metrics.recording.level = INFO
17:06:12 policy-apex-pdp | 	metrics.sample.window.ms = 30000
17:06:12 policy-apex-pdp | 	partitioner.adaptive.partitioning.enable = true
17:06:12 policy-apex-pdp | 	partitioner.availability.timeout.ms = 0
17:06:12 policy-apex-pdp | 	partitioner.class = null
17:06:12 policy-apex-pdp | 	partitioner.ignore.keys = false
17:06:12 policy-apex-pdp | 	receive.buffer.bytes = 32768
17:06:12 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
17:06:12 policy-apex-pdp | 	reconnect.backoff.ms = 50
17:06:12 policy-apex-pdp | 	request.timeout.ms = 30000
17:06:12 policy-apex-pdp | 	retries = 2147483647
17:06:12 policy-apex-pdp | 	retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.client.callback.handler.class = null
17:06:12 policy-apex-pdp | 	sasl.jaas.config = null
17:06:12 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
17:06:12 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
17:06:12 policy-apex-pdp | 	sasl.kerberos.service.name = null
17:06:12 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
17:06:12 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
17:06:12 policy-apex-pdp | 	sasl.login.callback.handler.class = null
17:06:12 policy-apex-pdp | 	sasl.login.class = null
17:06:12 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
17:06:12 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
17:06:12 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
17:06:12 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
17:06:12 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
17:06:12 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
17:06:12 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
17:06:12 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.mechanism = GSSAPI
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
17:06:12 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
17:06:12 policy-apex-pdp | 	security.protocol = PLAINTEXT
17:06:12 policy-apex-pdp | 	security.providers = null
17:06:12 policy-apex-pdp | 	send.buffer.bytes = 131072
17:06:12 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
17:06:12 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
17:06:12 policy-apex-pdp | 	ssl.cipher.suites = null
17:06:12 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
17:06:12 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
17:06:12 policy-apex-pdp | 	ssl.engine.factory.class = null
17:06:12 policy-apex-pdp | 	ssl.key.password = null
17:06:12 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
17:06:12 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
17:06:12 policy-apex-pdp | 	ssl.keystore.key = null
17:06:12 policy-apex-pdp | 	ssl.keystore.location = null
17:06:12 policy-apex-pdp | 	ssl.keystore.password = null
17:06:12 policy-apex-pdp | 	ssl.keystore.type = JKS
17:06:12 policy-apex-pdp | 	ssl.protocol = TLSv1.3
17:06:12 policy-apex-pdp | 	ssl.provider = null
17:06:12 policy-apex-pdp | 	ssl.secure.random.implementation = null
17:06:12 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
17:06:12 policy-apex-pdp | 	ssl.truststore.certificates = null
17:06:12 policy-apex-pdp | 	ssl.truststore.location = null
17:06:12 policy-apex-pdp | 	ssl.truststore.password = null
17:06:12 policy-apex-pdp | 	ssl.truststore.type = JKS
17:06:12 policy-apex-pdp | 	transaction.timeout.ms = 60000
17:06:12 policy-apex-pdp | 	transactional.id = null
17:06:12 policy-apex-pdp | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
17:06:12 policy-apex-pdp |
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.505+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.525+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.525+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.525+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394247525
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.526+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3166d225-e34e-4991-a92d-ce7a5ca681fb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.526+00:00|INFO|ServiceManager|main] service manager starting set alive
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.526+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.529+00:00|INFO|ServiceManager|main] service manager starting topic sinks
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.529+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.531+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.532+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.532+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.532+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d237d326-42e3-4b36-a9eb-fb715dab21ef, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.532+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d237d326-42e3-4b36-a9eb-fb715dab21ef, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.533+00:00|INFO|ServiceManager|main] service manager starting Create REST server
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.547+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
17:06:12 policy-apex-pdp | []
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.549+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9cfb11e8-8ef5-4f36-8db7-497b4ec9a0f5","timestampMs":1730394247532,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.730+00:00|INFO|ServiceManager|main] service manager starting Rest Server
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.730+00:00|INFO|ServiceManager|main] service manager starting
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.730+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.730+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.741+00:00|INFO|ServiceManager|main] service manager started
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.741+00:00|INFO|ServiceManager|main] service manager started
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.741+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.741+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@1ac85b0c{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3dd69f5a{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.851+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TF51AfRARcuaw1457m-14A
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.851+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Cluster ID: TF51AfRARcuaw1457m-14A
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.852+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] (Re-)joining group
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Request joining group due to: need to re-join with the given member-id: consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
17:06:12 policy-apex-pdp | [2024-10-31T17:04:07.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] (Re-)joining group
17:06:12 policy-apex-pdp | [2024-10-31T17:04:08.394+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
17:06:12 policy-apex-pdp | [2024-10-31T17:04:08.394+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.885+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df', protocol='range'}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Finished assignment for group at generation 1: {consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df=Assignment(partitions=[policy-pdp-pap-0])}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.905+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2-77c6b742-c550-4f47-b781-c7bb0e7760df', protocol='range'}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.908+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Adding newly assigned partitions: policy-pdp-pap-0
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Found no committed offset for partition policy-pdp-pap-0
17:06:12 policy-apex-pdp | [2024-10-31T17:04:10.931+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d237d326-42e3-4b36-a9eb-fb715dab21ef-2, groupId=d237d326-42e3-4b36-a9eb-fb715dab21ef] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
17:06:12 policy-apex-pdp | [2024-10-31T17:04:27.532+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f90b7713-30c7-4855-a3de-7aa0db39c976","timestampMs":1730394267532,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:27.559+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f90b7713-30c7-4855-a3de-7aa0db39c976","timestampMs":1730394267532,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:27.561+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.050+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","timestampMs":1730394267947,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.065+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13796921-8e6a-493f-a2c4-b4667b53616d","timestampMs":1730394268065,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.065+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.066+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8c646afb-62a4-4466-a133-5f1cd874b813","timestampMs":1730394268066,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.084+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13796921-8e6a-493f-a2c4-b4667b53616d","timestampMs":1730394268065,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.084+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.090+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8c646afb-62a4-4466-a133-5f1cd874b813","timestampMs":1730394268066,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.090+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.418+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9bc39ba9-5666-40d6-899e-758a2310f19c","timestampMs":1730394267948,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.422+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9bc39ba9-5666-40d6-899e-758a2310f19c","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"e948d2f0-494f-43e0-ba71-3069ac68216f","timestampMs":1730394268422,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.434+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9bc39ba9-5666-40d6-899e-758a2310f19c","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"e948d2f0-494f-43e0-ba71-3069ac68216f","timestampMs":1730394268422,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.436+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.467+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ce65d777-a36c-40c6-8ed7-a771182706b4","timestampMs":1730394268435,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.469+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ce65d777-a36c-40c6-8ed7-a771182706b4","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0be9e106-8763-4ee6-9d38-7c1f659301ad","timestampMs":1730394268469,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.480+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
17:06:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ce65d777-a36c-40c6-8ed7-a771182706b4","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0be9e106-8763-4ee6-9d38-7c1f659301ad","timestampMs":1730394268469,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
17:06:12 policy-apex-pdp | [2024-10-31T17:04:28.480+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
17:06:12 policy-apex-pdp | [2024-10-31T17:04:56.167+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.4 - policyadmin [31/Oct/2024:17:04:56 +0000] "GET /metrics HTTP/1.1" 200 10638 "-" "Prometheus/2.55.0"
17:06:12 policy-apex-pdp | [2024-10-31T17:05:56.087+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.4 - policyadmin [31/Oct/2024:17:05:56 +0000] "GET /metrics HTTP/1.1" 200 10643 "-" "Prometheus/2.55.0"
17:06:12 ===================================
17:06:12 ======== Logs from api ========
17:06:12 policy-api | Waiting for mariadb port 3306...
17:06:12 policy-api | mariadb (172.17.0.2:3306) open
17:06:12 policy-api | Waiting for policy-db-migrator port 6824...
17:06:12 policy-api | policy-db-migrator (172.17.0.7:6824) open
17:06:12 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
17:06:12 policy-api |
17:06:12 policy-api |   .   ____          _            __ _ _
17:06:12 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
17:06:12 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
17:06:12 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
17:06:12 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
17:06:12 policy-api |  =========|_|==============|___/=/_/_/_/
17:06:12 policy-api |  :: Spring Boot ::               (v3.1.10)
17:06:12 policy-api |
17:06:12 policy-api | [2024-10-31T17:03:43.565+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
17:06:12 policy-api | [2024-10-31T17:03:43.623+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
17:06:12 policy-api | [2024-10-31T17:03:43.624+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
17:06:12 policy-api | [2024-10-31T17:03:45.497+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
17:06:12 policy-api | [2024-10-31T17:03:45.585+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 77 ms. Found 6 JPA repository interfaces.
17:06:12 policy-api | [2024-10-31T17:03:46.012+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
17:06:12 policy-api | [2024-10-31T17:03:46.012+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
17:06:12 policy-api | [2024-10-31T17:03:46.578+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
17:06:12 policy-api | [2024-10-31T17:03:46.588+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
17:06:12 policy-api | [2024-10-31T17:03:46.590+00:00|INFO|StandardService|main] Starting service [Tomcat]
17:06:12 policy-api | [2024-10-31T17:03:46.590+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
17:06:12 policy-api | [2024-10-31T17:03:46.674+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
17:06:12 policy-api | [2024-10-31T17:03:46.675+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2987 ms
17:06:12 policy-api | [2024-10-31T17:03:47.116+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
17:06:12 policy-api | [2024-10-31T17:03:47.197+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
17:06:12 policy-api | [2024-10-31T17:03:47.247+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
17:06:12 policy-api | [2024-10-31T17:03:47.540+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
17:06:12 policy-api | [2024-10-31T17:03:47.581+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
17:06:12 policy-api | [2024-10-31T17:03:47.696+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb
17:06:12 policy-api | [2024-10-31T17:03:47.698+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
17:06:12 policy-api | [2024-10-31T17:03:49.678+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 17:06:12 policy-api | [2024-10-31T17:03:49.681+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:06:12 policy-api | [2024-10-31T17:03:50.725+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 17:06:12 policy-api | [2024-10-31T17:03:51.576+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 17:06:12 policy-api | [2024-10-31T17:03:52.657+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 17:06:12 policy-api | [2024-10-31T17:03:52.930+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5977f3d6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4aca25e8, org.springframework.security.web.context.SecurityContextHolderFilter@5c5e301f, org.springframework.security.web.header.HeaderWriterFilter@7991e022, org.springframework.security.web.authentication.logout.LogoutFilter@5b61d156, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1ae2b0d0, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@74355746, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@31f5829e, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@13a34a70, org.springframework.security.web.access.ExceptionTranslationFilter@10e5c13c, 
org.springframework.security.web.access.intercept.AuthorizationFilter@7ac47e87] 17:06:12 policy-api | [2024-10-31T17:03:53.765+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:06:12 policy-api | [2024-10-31T17:03:53.858+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:06:12 policy-api | [2024-10-31T17:03:53.877+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 17:06:12 policy-api | [2024-10-31T17:03:53.895+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.991 seconds (process running for 11.585) 17:06:12 policy-api | [2024-10-31T17:04:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:06:12 policy-api | [2024-10-31T17:04:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 17:06:12 policy-api | [2024-10-31T17:04:39.924+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 17:06:12 policy-api | [2024-10-31T17:04:40.549+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 17:06:12 policy-api | [] 17:06:12 =================================== 17:06:12 ======== Logs from csit-tests ======== 17:06:12 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot 17:06:12 policy-csit | Run Robot test 17:06:12 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 17:06:12 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 17:06:12 policy-csit | -v POLICY_API_IP:policy-api:6969 17:06:12 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 17:06:12 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 17:06:12 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 17:06:12 policy-csit | -v 
APEX_IP:policy-apex-pdp:6969 17:06:12 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 17:06:12 policy-csit | -v KAFKA_IP:kafka:9092 17:06:12 policy-csit | -v PROMETHEUS_IP:prometheus:9090 17:06:12 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 17:06:12 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 17:06:12 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 17:06:12 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 17:06:12 policy-csit | -v TEMP_FOLDER:/tmp/distribution 17:06:12 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 17:06:12 policy-csit | -v CLAMP_K8S_TEST: 17:06:12 policy-csit | Starting Robot test suites ... 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | Pap-Test & Pap-Slas 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | Pap-Test & Pap-Slas.Pap-Test 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... 
| PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... 
| PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 17:06:12 policy-csit | 22 tests, 22 passed, 0 failed 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... 
| PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 17:06:12 policy-csit | ------------------------------------------------------------------------------ 17:06:12 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 17:06:12 policy-csit | 8 tests, 8 passed, 0 failed 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | Pap-Test & Pap-Slas | PASS | 17:06:12 policy-csit | 30 tests, 30 passed, 0 failed 17:06:12 policy-csit | ============================================================================== 17:06:12 policy-csit | Output: /tmp/results/output.xml 17:06:12 policy-csit | Log: /tmp/results/log.html 17:06:12 policy-csit | Report: /tmp/results/report.html 17:06:12 policy-csit | RESULT: 0 17:06:12 =================================== 17:06:12 ======== Logs from policy-db-migrator ======== 17:06:12 policy-db-migrator | Waiting for mariadb port 3306... 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 17:06:12 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
17:06:12 policy-db-migrator | 321 blocks 17:06:12 policy-db-migrator | Preparing upgrade release version: 0800 17:06:12 policy-db-migrator | Preparing upgrade release version: 0900 17:06:12 policy-db-migrator | Preparing upgrade release version: 1000 17:06:12 policy-db-migrator | Preparing upgrade release version: 1100 17:06:12 policy-db-migrator | Preparing upgrade release version: 1200 17:06:12 policy-db-migrator | Preparing upgrade release version: 1300 17:06:12 policy-db-migrator | Done 17:06:12 policy-db-migrator | name version 17:06:12 policy-db-migrator | policyadmin 0 17:06:12 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 17:06:12 policy-db-migrator | upgrade: 0 -> 1300 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 17:06:12 
policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 
policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 
0250-jpatoscanodetemplate_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | 
-------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 
policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 17:06:12 
policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY 
VARCHAR(255) NULL)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0450-pdpgroup.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0470-pdp.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0570-toscadatatype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0630-toscanodetype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0660-toscaparameter.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0670-toscapolicies.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0690-toscapolicy.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0730-toscaproperty.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0770-toscarequirement.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0780-toscarequirements.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0820-toscatrigger.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0100-pdp.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
17:06:12 policy-db-migrator | JOIN pdpstatistics b
17:06:12 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
17:06:12 policy-db-migrator | SET a.id = b.id
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0210-sequence.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0220-sequence.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
17:06:12 policy-db-migrator | --------------
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
17:06:12 policy-db-migrator | 
> upgrade 0110-jpatoscapolicytype_targets.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0120-toscatrigger.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0140-toscaparameter.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0150-toscaproperty.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 17:06:12 policy-db-migrator | -------------- 17:06:12 
policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0100-upgrade.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | select 'upgrade to 1100 completed' as msg 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | msg 17:06:12 policy-db-migrator | upgrade to 1100 completed 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 
17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0120-audit_sequence.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | TRUNCATE TABLE sequence 17:06:12 
policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE pdpstatistics 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | DROP TABLE statistics_sequence 17:06:12 policy-db-migrator | -------------- 17:06:12 policy-db-migrator | 17:06:12 policy-db-migrator | policyadmin: OK: upgrade (1300) 17:06:12 policy-db-migrator | name version 17:06:12 policy-db-migrator | policyadmin 1300 17:06:12 policy-db-migrator | ID script operation from_version to_version tag success atTime 17:06:12 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:35 17:06:12 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:35 17:06:12 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:35 17:06:12 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:35 17:06:12 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 6 
0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 3110241703350800u 1 
2024-10-31 17:03:36 17:06:12 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:36 17:06:12 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 3110241703350800u 1 
2024-10-31 17:03:37 17:06:12 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:37 17:06:12 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 54 0630-toscanodetype.sql 
upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 71 
0800-toscaservicetemplate.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:38 17:06:12 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 86 
0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:39 17:06:12 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 3110241703350800u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 
0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 3110241703350900u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:40 17:06:12 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:41 17:06:12 
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 3110241703351000u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 3110241703351100u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 3110241703351200u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 3110241703351200u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 3110241703351200u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 3110241703351200u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 3110241703351300u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 3110241703351300u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 3110241703351300u 1 2024-10-31 17:03:41 17:06:12 policy-db-migrator | policyadmin: OK @ 1300 17:06:12 =================================== 17:06:12 ======== Logs from pap ======== 17:06:12 policy-pap | Waiting for mariadb port 3306... 17:06:12 policy-pap | mariadb (172.17.0.2:3306) open 17:06:12 policy-pap | Waiting for kafka port 9092... 17:06:12 policy-pap | kafka (172.17.0.9:9092) open 17:06:12 policy-pap | Waiting for api port 6969... 17:06:12 policy-pap | api (172.17.0.8:6969) open 17:06:12 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 17:06:12 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 17:06:12 policy-pap | 17:06:12 policy-pap | . 
____ _ __ _ _ 17:06:12 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:06:12 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:06:12 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:06:12 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 17:06:12 policy-pap | =========|_|==============|___/=/_/_/_/ 17:06:12 policy-pap | :: Spring Boot :: (v3.1.10) 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:03:56.108+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:06:12 policy-pap | [2024-10-31T17:03:56.185+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 17:06:12 policy-pap | [2024-10-31T17:03:56.186+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 17:06:12 policy-pap | [2024-10-31T17:03:58.211+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 17:06:12 policy-pap | [2024-10-31T17:03:58.304+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 7 JPA repository interfaces. 17:06:12 policy-pap | [2024-10-31T17:03:58.825+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:06:12 policy-pap | [2024-10-31T17:03:58.826+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:06:12 policy-pap | [2024-10-31T17:03:59.601+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:06:12 policy-pap | [2024-10-31T17:03:59.615+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:06:12 policy-pap | [2024-10-31T17:03:59.617+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:06:12 policy-pap | [2024-10-31T17:03:59.617+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 17:06:12 policy-pap | [2024-10-31T17:03:59.726+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 17:06:12 policy-pap | [2024-10-31T17:03:59.726+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3456 ms 17:06:12 policy-pap | [2024-10-31T17:04:00.196+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:06:12 policy-pap | [2024-10-31T17:04:00.265+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 17:06:12 policy-pap | [2024-10-31T17:04:00.668+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 17:06:12 policy-pap | [2024-10-31T17:04:00.791+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@46185a1b 17:06:12 policy-pap | [2024-10-31T17:04:00.793+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
17:06:12 policy-pap | [2024-10-31T17:04:00.830+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 17:06:12 policy-pap | [2024-10-31T17:04:02.413+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 17:06:12 policy-pap | [2024-10-31T17:04:02.423+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:06:12 policy-pap | [2024-10-31T17:04:02.959+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 17:06:12 policy-pap | [2024-10-31T17:04:03.348+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 17:06:12 policy-pap | [2024-10-31T17:04:03.452+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 17:06:12 policy-pap | [2024-10-31T17:04:03.742+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-pap | allow.auto.create.topics = true 17:06:12 policy-pap | auto.commit.interval.ms = 5000 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | auto.offset.reset = latest 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | check.crcs = true 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-1 17:06:12 policy-pap | client.rack = 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | default.api.timeout.ms = 60000 17:06:12 policy-pap | enable.auto.commit = true 17:06:12 policy-pap | exclude.internal.topics = true 17:06:12 policy-pap | fetch.max.bytes = 52428800 17:06:12 policy-pap | fetch.max.wait.ms = 500 17:06:12 policy-pap | fetch.min.bytes = 1 17:06:12 policy-pap | group.id = c5240df4-4957-4aee-bcf1-b1765fb43c2f 17:06:12 policy-pap | group.instance.id = null 17:06:12 policy-pap | heartbeat.interval.ms = 3000 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | internal.leave.group.on.close = true 17:06:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-pap | isolation.level = read_uncommitted 17:06:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | max.partition.fetch.bytes = 1048576 17:06:12 policy-pap | max.poll.interval.ms = 300000 17:06:12 policy-pap | max.poll.records = 500 17:06:12 policy-pap | metadata.max.age.ms = 300000 17:06:12 policy-pap | metric.reporters = [] 17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 30000 17:06:12 
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:12 policy-pap | receive.buffer.bytes = 65536 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | session.timeout.ms = 45000 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:04:03.896+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | [2024-10-31T17:04:03.896+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:03.896+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394243894 17:06:12 policy-pap | 
[2024-10-31T17:04:03.898+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-1, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Subscribed to topic(s): policy-pdp-pap 17:06:12 policy-pap | [2024-10-31T17:04:03.899+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-pap | allow.auto.create.topics = true 17:06:12 policy-pap | auto.commit.interval.ms = 5000 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | auto.offset.reset = latest 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | check.crcs = true 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = consumer-policy-pap-2 17:06:12 policy-pap | client.rack = 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | default.api.timeout.ms = 60000 17:06:12 policy-pap | enable.auto.commit = true 17:06:12 policy-pap | exclude.internal.topics = true 17:06:12 policy-pap | fetch.max.bytes = 52428800 17:06:12 policy-pap | fetch.max.wait.ms = 500 17:06:12 policy-pap | fetch.min.bytes = 1 17:06:12 policy-pap | group.id = policy-pap 17:06:12 policy-pap | group.instance.id = null 17:06:12 policy-pap | heartbeat.interval.ms = 3000 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | internal.leave.group.on.close = true 17:06:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-pap | isolation.level = read_uncommitted 17:06:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | max.partition.fetch.bytes = 1048576 17:06:12 policy-pap | max.poll.interval.ms = 300000 17:06:12 policy-pap | max.poll.records = 500 17:06:12 policy-pap | metadata.max.age.ms = 300000 17:06:12 policy-pap | metric.reporters = [] 17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 
30000 17:06:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:12 policy-pap | receive.buffer.bytes = 65536 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 
policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | session.timeout.ms = 45000 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:04:03.904+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | [2024-10-31T17:04:03.904+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:03.904+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394243904 17:06:12 policy-pap | 
[2024-10-31T17:04:03.904+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:06:12 policy-pap | [2024-10-31T17:04:04.222+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 17:06:12 policy-pap | [2024-10-31T17:04:04.397+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 17:06:12 policy-pap | [2024-10-31T17:04:04.619+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@12f3fcd, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6dc2e473, org.springframework.security.web.context.SecurityContextHolderFilter@2e91cf69, org.springframework.security.web.header.HeaderWriterFilter@2c70a1de, org.springframework.security.web.authentication.logout.LogoutFilter@6b52a40, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@36ab69d9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@37665305, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2c224096, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@12ebfb2d, org.springframework.security.web.access.ExceptionTranslationFilter@71eafb64, 
org.springframework.security.web.access.intercept.AuthorizationFilter@4177d422] 17:06:12 policy-pap | [2024-10-31T17:04:05.418+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:06:12 policy-pap | [2024-10-31T17:04:05.522+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:06:12 policy-pap | [2024-10-31T17:04:05.545+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 17:06:12 policy-pap | [2024-10-31T17:04:05.563+00:00|INFO|ServiceManager|main] Policy PAP starting 17:06:12 policy-pap | [2024-10-31T17:04:05.563+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 17:06:12 policy-pap | [2024-10-31T17:04:05.563+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 17:06:12 policy-pap | [2024-10-31T17:04:05.564+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 17:06:12 policy-pap | [2024-10-31T17:04:05.564+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 17:06:12 policy-pap | [2024-10-31T17:04:05.565+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 17:06:12 policy-pap | [2024-10-31T17:04:05.565+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 17:06:12 policy-pap | [2024-10-31T17:04:05.566+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c5240df4-4957-4aee-bcf1-b1765fb43c2f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@78b44fcb 17:06:12 policy-pap | [2024-10-31T17:04:05.578+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c5240df4-4957-4aee-bcf1-b1765fb43c2f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:12 policy-pap | [2024-10-31T17:04:05.579+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-pap | allow.auto.create.topics = true 17:06:12 policy-pap | auto.commit.interval.ms = 5000 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | auto.offset.reset = latest 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | check.crcs = true 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3 17:06:12 policy-pap | client.rack = 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | default.api.timeout.ms = 60000 17:06:12 policy-pap | enable.auto.commit = true 17:06:12 policy-pap | exclude.internal.topics = true 17:06:12 policy-pap | fetch.max.bytes = 52428800 17:06:12 policy-pap | fetch.max.wait.ms = 500 17:06:12 policy-pap | fetch.min.bytes = 1 17:06:12 policy-pap | group.id = c5240df4-4957-4aee-bcf1-b1765fb43c2f 17:06:12 policy-pap | group.instance.id = null 17:06:12 policy-pap | heartbeat.interval.ms = 3000 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | internal.leave.group.on.close = true 17:06:12 policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-pap | isolation.level = read_uncommitted 17:06:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | max.partition.fetch.bytes = 1048576 17:06:12 policy-pap | max.poll.interval.ms = 300000 17:06:12 policy-pap | max.poll.records = 500 17:06:12 policy-pap | metadata.max.age.ms = 300000 17:06:12 policy-pap | metric.reporters = [] 17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 30000 17:06:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:12 policy-pap | receive.buffer.bytes = 65536 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | 
sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | session.timeout.ms = 45000 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = 
null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:04:05.585+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | [2024-10-31T17:04:05.585+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:05.585+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394245585 17:06:12 policy-pap | [2024-10-31T17:04:05.585+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Subscribed to topic(s): policy-pdp-pap 17:06:12 policy-pap | [2024-10-31T17:04:05.586+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 17:06:12 policy-pap | [2024-10-31T17:04:05.586+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=71ce3477-509a-4a82-b8ee-cca604cb5302, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7a364e1c 17:06:12 policy-pap | [2024-10-31T17:04:05.586+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=71ce3477-509a-4a82-b8ee-cca604cb5302, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:12 policy-pap | [2024-10-31T17:04:05.586+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:12 policy-pap | allow.auto.create.topics = true 17:06:12 policy-pap | auto.commit.interval.ms = 5000 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | auto.offset.reset = latest 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | check.crcs = true 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = consumer-policy-pap-4 17:06:12 policy-pap | client.rack = 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | default.api.timeout.ms = 60000 17:06:12 policy-pap | enable.auto.commit = true 17:06:12 policy-pap | exclude.internal.topics = true 17:06:12 policy-pap | fetch.max.bytes = 52428800 17:06:12 policy-pap | fetch.max.wait.ms = 500 17:06:12 policy-pap | fetch.min.bytes = 1 17:06:12 policy-pap | group.id = policy-pap 17:06:12 policy-pap | group.instance.id = null 17:06:12 policy-pap | heartbeat.interval.ms = 3000 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | internal.leave.group.on.close = true 17:06:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:12 policy-pap | isolation.level = read_uncommitted 17:06:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | max.partition.fetch.bytes = 1048576 17:06:12 policy-pap | max.poll.interval.ms = 300000 17:06:12 policy-pap | max.poll.records = 500 17:06:12 policy-pap | metadata.max.age.ms = 
300000 17:06:12 policy-pap | metric.reporters = [] 17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 30000 17:06:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:12 policy-pap | receive.buffer.bytes = 65536 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | session.timeout.ms = 45000 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:04:05.591+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | 
[2024-10-31T17:04:05.591+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:05.591+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394245591 17:06:12 policy-pap | [2024-10-31T17:04:05.591+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:06:12 policy-pap | [2024-10-31T17:04:05.591+00:00|INFO|ServiceManager|main] Policy PAP starting topics 17:06:12 policy-pap | [2024-10-31T17:04:05.591+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=71ce3477-509a-4a82-b8ee-cca604cb5302, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:12 policy-pap | [2024-10-31T17:04:05.592+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c5240df4-4957-4aee-bcf1-b1765fb43c2f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:12 policy-pap | 
[2024-10-31T17:04:05.592+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cf266153-7ff3-4910-8654-a0b3e9bb868f, alive=false, publisher=null]]: starting 17:06:12 policy-pap | [2024-10-31T17:04:05.608+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:06:12 policy-pap | acks = -1 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | batch.size = 16384 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | buffer.memory = 33554432 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = producer-1 17:06:12 policy-pap | compression.type = none 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | delivery.timeout.ms = 120000 17:06:12 policy-pap | enable.idempotence = true 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:12 policy-pap | linger.ms = 0 17:06:12 policy-pap | max.block.ms = 60000 17:06:12 policy-pap | max.in.flight.requests.per.connection = 5 17:06:12 policy-pap | max.request.size = 1048576 17:06:12 policy-pap | metadata.max.age.ms = 300000 17:06:12 policy-pap | metadata.max.idle.ms = 300000 17:06:12 policy-pap | metric.reporters = [] 17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 30000 17:06:12 policy-pap | partitioner.adaptive.partitioning.enable = true 17:06:12 policy-pap | partitioner.availability.timeout.ms = 0 17:06:12 policy-pap | partitioner.class = null 17:06:12 policy-pap | partitioner.ignore.keys = false 17:06:12 policy-pap | receive.buffer.bytes = 32768 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retries = 2147483647 
17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 
30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | transaction.timeout.ms = 60000 17:06:12 policy-pap | transactional.id = null 17:06:12 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:12 policy-pap | 17:06:12 policy-pap | [2024-10-31T17:04:05.619+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
17:06:12 policy-pap | [2024-10-31T17:04:05.636+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | [2024-10-31T17:04:05.636+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:05.636+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394245636 17:06:12 policy-pap | [2024-10-31T17:04:05.637+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cf266153-7ff3-4910-8654-a0b3e9bb868f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:06:12 policy-pap | [2024-10-31T17:04:05.637+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=95894d53-a921-453b-89c5-ffed01863619, alive=false, publisher=null]]: starting 17:06:12 policy-pap | [2024-10-31T17:04:05.637+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:06:12 policy-pap | acks = -1 17:06:12 policy-pap | auto.include.jmx.reporter = true 17:06:12 policy-pap | batch.size = 16384 17:06:12 policy-pap | bootstrap.servers = [kafka:9092] 17:06:12 policy-pap | buffer.memory = 33554432 17:06:12 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:12 policy-pap | client.id = producer-2 17:06:12 policy-pap | compression.type = none 17:06:12 policy-pap | connections.max.idle.ms = 540000 17:06:12 policy-pap | delivery.timeout.ms = 120000 17:06:12 policy-pap | enable.idempotence = true 17:06:12 policy-pap | interceptor.classes = [] 17:06:12 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:12 policy-pap | linger.ms = 0 17:06:12 policy-pap | max.block.ms = 60000 17:06:12 policy-pap | max.in.flight.requests.per.connection = 5 17:06:12 policy-pap | max.request.size = 1048576 17:06:12 policy-pap | metadata.max.age.ms = 300000 17:06:12 policy-pap | metadata.max.idle.ms = 300000 17:06:12 policy-pap | metric.reporters = [] 
17:06:12 policy-pap | metrics.num.samples = 2 17:06:12 policy-pap | metrics.recording.level = INFO 17:06:12 policy-pap | metrics.sample.window.ms = 30000 17:06:12 policy-pap | partitioner.adaptive.partitioning.enable = true 17:06:12 policy-pap | partitioner.availability.timeout.ms = 0 17:06:12 policy-pap | partitioner.class = null 17:06:12 policy-pap | partitioner.ignore.keys = false 17:06:12 policy-pap | receive.buffer.bytes = 32768 17:06:12 policy-pap | reconnect.backoff.max.ms = 1000 17:06:12 policy-pap | reconnect.backoff.ms = 50 17:06:12 policy-pap | request.timeout.ms = 30000 17:06:12 policy-pap | retries = 2147483647 17:06:12 policy-pap | retry.backoff.ms = 100 17:06:12 policy-pap | sasl.client.callback.handler.class = null 17:06:12 policy-pap | sasl.jaas.config = null 17:06:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:12 policy-pap | sasl.kerberos.service.name = null 17:06:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:12 policy-pap | sasl.login.callback.handler.class = null 17:06:12 policy-pap | sasl.login.class = null 17:06:12 policy-pap | sasl.login.connect.timeout.ms = null 17:06:12 policy-pap | sasl.login.read.timeout.ms = null 17:06:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:12 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.mechanism = GSSAPI 17:06:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:12 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:12 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 
3600000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:12 policy-pap | security.protocol = PLAINTEXT 17:06:12 policy-pap | security.providers = null 17:06:12 policy-pap | send.buffer.bytes = 131072 17:06:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:12 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:12 policy-pap | ssl.cipher.suites = null 17:06:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:12 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:12 policy-pap | ssl.engine.factory.class = null 17:06:12 policy-pap | ssl.key.password = null 17:06:12 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:12 policy-pap | ssl.keystore.certificate.chain = null 17:06:12 policy-pap | ssl.keystore.key = null 17:06:12 policy-pap | ssl.keystore.location = null 17:06:12 policy-pap | ssl.keystore.password = null 17:06:12 policy-pap | ssl.keystore.type = JKS 17:06:12 policy-pap | ssl.protocol = TLSv1.3 17:06:12 policy-pap | ssl.provider = null 17:06:12 policy-pap | ssl.secure.random.implementation = null 17:06:12 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:12 policy-pap | ssl.truststore.certificates = null 17:06:12 policy-pap | ssl.truststore.location = null 17:06:12 policy-pap | ssl.truststore.password = null 17:06:12 policy-pap | ssl.truststore.type = JKS 17:06:12 policy-pap | transaction.timeout.ms = 60000 17:06:12 policy-pap | transactional.id = null 17:06:12 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:12 policy-pap | 17:06:12 policy-pap | 
[2024-10-31T17:04:05.638+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 17:06:12 policy-pap | [2024-10-31T17:04:05.641+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:12 policy-pap | [2024-10-31T17:04:05.641+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:12 policy-pap | [2024-10-31T17:04:05.641+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1730394245641 17:06:12 policy-pap | [2024-10-31T17:04:05.641+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=95894d53-a921-453b-89c5-ffed01863619, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:06:12 policy-pap | [2024-10-31T17:04:05.642+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 17:06:12 policy-pap | [2024-10-31T17:04:05.642+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 17:06:12 policy-pap | [2024-10-31T17:04:05.644+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 17:06:12 policy-pap | [2024-10-31T17:04:05.644+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 17:06:12 policy-pap | [2024-10-31T17:04:05.646+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 17:06:12 policy-pap | [2024-10-31T17:04:05.647+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 17:06:12 policy-pap | [2024-10-31T17:04:05.648+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 17:06:12 policy-pap | [2024-10-31T17:04:05.648+00:00|INFO|TimerManager|Thread-9] timer manager update started 17:06:12 policy-pap | [2024-10-31T17:04:05.648+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 17:06:12 policy-pap | [2024-10-31T17:04:05.649+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 17:06:12 policy-pap | 
[2024-10-31T17:04:05.654+00:00|INFO|ServiceManager|main] Policy PAP started 17:06:12 policy-pap | [2024-10-31T17:04:05.655+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.267 seconds (process running for 10.895) 17:06:12 policy-pap | [2024-10-31T17:04:06.088+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 17:06:12 policy-pap | [2024-10-31T17:04:06.089+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: TF51AfRARcuaw1457m-14A 17:06:12 policy-pap | [2024-10-31T17:04:06.090+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: TF51AfRARcuaw1457m-14A 17:06:12 policy-pap | [2024-10-31T17:04:06.093+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TF51AfRARcuaw1457m-14A 17:06:12 policy-pap | [2024-10-31T17:04:06.138+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:12 policy-pap | [2024-10-31T17:04:06.139+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Cluster ID: TF51AfRARcuaw1457m-14A 17:06:12 policy-pap | [2024-10-31T17:04:06.205+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 17:06:12 policy-pap | [2024-10-31T17:04:06.207+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 17:06:12 policy-pap | 
[2024-10-31T17:04:06.214+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:12 policy-pap | [2024-10-31T17:04:06.251+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:12 policy-pap | [2024-10-31T17:04:06.346+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:12 policy-pap | [2024-10-31T17:04:06.364+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:12 policy-pap | [2024-10-31T17:04:07.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:06:12 policy-pap | [2024-10-31T17:04:07.087+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:06:12 policy-pap | [2024-10-31T17:04:07.107+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:06:12 policy-pap | [2024-10-31T17:04:07.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, 
groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] (Re-)joining group 17:06:12 policy-pap | [2024-10-31T17:04:07.118+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Request joining group due to: need to re-join with the given member-id: consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f 17:06:12 policy-pap | [2024-10-31T17:04:07.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:06:12 policy-pap | [2024-10-31T17:04:07.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120 17:06:12 policy-pap | [2024-10-31T17:04:07.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] (Re-)joining group 17:06:12 policy-pap | [2024-10-31T17:04:07.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:06:12 policy-pap | [2024-10-31T17:04:07.119+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:06:12 policy-pap | [2024-10-31T17:04:10.136+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f', protocol='range'} 17:06:12 policy-pap | [2024-10-31T17:04:10.139+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120', protocol='range'} 17:06:12 policy-pap | [2024-10-31T17:04:10.182+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Finished assignment for group at generation 1: {consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f=Assignment(partitions=[policy-pdp-pap-0])} 17:06:12 policy-pap | [2024-10-31T17:04:10.182+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120=Assignment(partitions=[policy-pdp-pap-0])} 17:06:12 policy-pap | [2024-10-31T17:04:10.213+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-fad83ed3-0469-4b79-889f-bc62d6959120', protocol='range'} 
17:06:12 policy-pap | [2024-10-31T17:04:10.214+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:06:12 policy-pap | [2024-10-31T17:04:10.216+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3-cd775aa6-61a0-4c5c-a748-f96b87fb4e5f', protocol='range'} 17:06:12 policy-pap | [2024-10-31T17:04:10.216+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:06:12 policy-pap | [2024-10-31T17:04:10.218+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 17:06:12 policy-pap | [2024-10-31T17:04:10.218+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Adding newly assigned partitions: policy-pdp-pap-0 17:06:12 policy-pap | [2024-10-31T17:04:10.238+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Found no committed offset for partition policy-pdp-pap-0 17:06:12 policy-pap | [2024-10-31T17:04:10.238+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 17:06:12 policy-pap | 
[2024-10-31T17:04:10.258+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:06:12 policy-pap | [2024-10-31T17:04:10.258+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c5240df4-4957-4aee-bcf1-b1765fb43c2f-3, groupId=c5240df4-4957-4aee-bcf1-b1765fb43c2f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:06:12 policy-pap | [2024-10-31T17:04:27.567+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 17:06:12 policy-pap | [] 17:06:12 policy-pap | [2024-10-31T17:04:27.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f90b7713-30c7-4855-a3de-7aa0db39c976","timestampMs":1730394267532,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"} 17:06:12 policy-pap | [2024-10-31T17:04:27.568+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f90b7713-30c7-4855-a3de-7aa0db39c976","timestampMs":1730394267532,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"} 17:06:12 policy-pap | [2024-10-31T17:04:27.575+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:06:12 policy-pap | 
[2024-10-31T17:04:27.974+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting 17:06:12 policy-pap | [2024-10-31T17:04:27.975+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting listener 17:06:12 policy-pap | [2024-10-31T17:04:27.975+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting timer 17:06:12 policy-pap | [2024-10-31T17:04:27.975+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=9ed7852d-83d4-4534-b2e7-e1ff9ed08deb, expireMs=1730394297975] 17:06:12 policy-pap | [2024-10-31T17:04:27.977+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting enqueue 17:06:12 policy-pap | [2024-10-31T17:04:27.977+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate started 17:06:12 policy-pap | [2024-10-31T17:04:27.977+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=9ed7852d-83d4-4534-b2e7-e1ff9ed08deb, expireMs=1730394297975] 17:06:12 policy-pap | [2024-10-31T17:04:27.983+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","timestampMs":1730394267947,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.040+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | 
{"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","timestampMs":1730394267947,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.050+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:06:12 policy-pap | [2024-10-31T17:04:28.059+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","timestampMs":1730394267947,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.059+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:06:12 policy-pap | [2024-10-31T17:04:28.080+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13796921-8e6a-493f-a2c4-b4667b53616d","timestampMs":1730394268065,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"} 17:06:12 policy-pap | [2024-10-31T17:04:28.118+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"13796921-8e6a-493f-a2c4-b4667b53616d","timestampMs":1730394268065,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup"} 17:06:12 policy-pap | 
[2024-10-31T17:04:28.118+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:06:12 policy-pap | [2024-10-31T17:04:28.119+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8c646afb-62a4-4466-a133-5f1cd874b813","timestampMs":1730394268066,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.328+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping 17:06:12 policy-pap | [2024-10-31T17:04:28.328+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9ed7852d-83d4-4534-b2e7-e1ff9ed08deb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"8c646afb-62a4-4466-a133-5f1cd874b813","timestampMs":1730394268066,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.329+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9ed7852d-83d4-4534-b2e7-e1ff9ed08deb 17:06:12 policy-pap | [2024-10-31T17:04:28.330+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping enqueue 17:06:12 policy-pap | [2024-10-31T17:04:28.331+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping timer 17:06:12 policy-pap | [2024-10-31T17:04:28.333+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9ed7852d-83d4-4534-b2e7-e1ff9ed08deb, expireMs=1730394297975] 17:06:12 policy-pap | [2024-10-31T17:04:28.333+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping listener 17:06:12 policy-pap | [2024-10-31T17:04:28.333+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopped 17:06:12 policy-pap | [2024-10-31T17:04:28.338+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate successful 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 start publishing next request 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange starting 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange starting listener 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange starting timer 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=9bc39ba9-5666-40d6-899e-758a2310f19c, expireMs=1730394298339] 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange starting enqueue 17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange started 
17:06:12 policy-pap | [2024-10-31T17:04:28.339+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=9bc39ba9-5666-40d6-899e-758a2310f19c, expireMs=1730394298339] 17:06:12 policy-pap | [2024-10-31T17:04:28.342+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9bc39ba9-5666-40d6-899e-758a2310f19c","timestampMs":1730394267948,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.426+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9bc39ba9-5666-40d6-899e-758a2310f19c","timestampMs":1730394267948,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.429+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 17:06:12 policy-pap | [2024-10-31T17:04:28.438+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9bc39ba9-5666-40d6-899e-758a2310f19c","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e948d2f0-494f-43e0-ba71-3069ac68216f","timestampMs":1730394268422,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.439+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9bc39ba9-5666-40d6-899e-758a2310f19c 17:06:12 policy-pap | [2024-10-31T17:04:28.447+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9bc39ba9-5666-40d6-899e-758a2310f19c","timestampMs":1730394267948,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.448+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 17:06:12 policy-pap | [2024-10-31T17:04:28.452+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9bc39ba9-5666-40d6-899e-758a2310f19c","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e948d2f0-494f-43e0-ba71-3069ac68216f","timestampMs":1730394268422,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange stopping 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange stopping enqueue 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange stopping timer 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=9bc39ba9-5666-40d6-899e-758a2310f19c, expireMs=1730394298339] 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange stopping listener 17:06:12 policy-pap | [2024-10-31T17:04:28.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange stopped 17:06:12 policy-pap | [2024-10-31T17:04:28.454+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpStateChange successful 17:06:12 policy-pap | [2024-10-31T17:04:28.454+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 start publishing next request 17:06:12 policy-pap | [2024-10-31T17:04:28.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting 17:06:12 policy-pap | [2024-10-31T17:04:28.455+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting listener 17:06:12 policy-pap | 
[2024-10-31T17:04:28.455+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting timer 17:06:12 policy-pap | [2024-10-31T17:04:28.455+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=ce65d777-a36c-40c6-8ed7-a771182706b4, expireMs=1730394298455] 17:06:12 policy-pap | [2024-10-31T17:04:28.455+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate starting enqueue 17:06:12 policy-pap | [2024-10-31T17:04:28.456+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate started 17:06:12 policy-pap | [2024-10-31T17:04:28.456+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ce65d777-a36c-40c6-8ed7-a771182706b4","timestampMs":1730394268435,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.466+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ce65d777-a36c-40c6-8ed7-a771182706b4","timestampMs":1730394268435,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.466+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:06:12 policy-pap | [2024-10-31T17:04:28.467+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | 
{"source":"pap-002a3449-2290-4722-aeec-12e93e1c8792","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ce65d777-a36c-40c6-8ed7-a771182706b4","timestampMs":1730394268435,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.467+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:06:12 policy-pap | [2024-10-31T17:04:28.480+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:12 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ce65d777-a36c-40c6-8ed7-a771182706b4","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0be9e106-8763-4ee6-9d38-7c1f659301ad","timestampMs":1730394268469,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.481+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ce65d777-a36c-40c6-8ed7-a771182706b4 17:06:12 policy-pap | [2024-10-31T17:04:28.483+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:12 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ce65d777-a36c-40c6-8ed7-a771182706b4","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"0be9e106-8763-4ee6-9d38-7c1f659301ad","timestampMs":1730394268469,"name":"apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:12 policy-pap | [2024-10-31T17:04:28.484+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping 17:06:12 policy-pap | [2024-10-31T17:04:28.484+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping enqueue 17:06:12 policy-pap | [2024-10-31T17:04:28.484+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping timer 17:06:12 policy-pap | [2024-10-31T17:04:28.484+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ce65d777-a36c-40c6-8ed7-a771182706b4, expireMs=1730394298455] 17:06:12 policy-pap | [2024-10-31T17:04:28.486+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopping listener 17:06:12 policy-pap | [2024-10-31T17:04:28.486+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate stopped 17:06:12 policy-pap | [2024-10-31T17:04:28.493+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 PdpUpdate successful 17:06:12 policy-pap | [2024-10-31T17:04:28.493+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1f23917f-b9dd-45f7-9c99-37e38f2ffe19 has no more requests 17:06:12 policy-pap | [2024-10-31T17:04:41.588+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:06:12 policy-pap | [2024-10-31T17:04:41.589+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 17:06:12 policy-pap | [2024-10-31T17:04:41.591+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 17:06:12 policy-pap | [2024-10-31T17:04:57.976+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=9ed7852d-83d4-4534-b2e7-e1ff9ed08deb, expireMs=1730394297975] 17:06:12 policy-pap | [2024-10-31T17:04:58.340+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer 
[name=9bc39ba9-5666-40d6-899e-758a2310f19c, expireMs=1730394298339] 17:06:12 policy-pap | [2024-10-31T17:05:02.358+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 17:06:12 policy-pap | [2024-10-31T17:05:02.405+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:12 policy-pap | [2024-10-31T17:05:02.412+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:12 policy-pap | [2024-10-31T17:05:02.416+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:12 policy-pap | [2024-10-31T17:05:02.782+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:03.344+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:03.344+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:03.843+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.030+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 17:06:12 policy-pap | [2024-10-31T17:05:04.160+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 17:06:12 policy-pap | [2024-10-31T17:05:04.160+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.160+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.176+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-10-31T17:05:04Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, 
policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-10-31T17:05:04Z, user=policyadmin)] 17:06:12 policy-pap | [2024-10-31T17:05:04.910+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.911+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 17:06:12 policy-pap | [2024-10-31T17:05:04.911+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0 17:06:12 policy-pap | [2024-10-31T17:05:04.911+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.912+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:04.923+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-10-31T17:05:04Z, user=policyadmin)] 17:06:12 policy-pap | [2024-10-31T17:05:05.282+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 17:06:12 policy-pap | [2024-10-31T17:05:05.282+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:05.282+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 17:06:12 policy-pap | [2024-10-31T17:05:05.283+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 17:06:12 policy-pap | [2024-10-31T17:05:05.283+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:05.283+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 17:06:12 policy-pap | 
[2024-10-31T17:05:05.294+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-10-31T17:05:05Z, user=policyadmin)] 17:06:12 policy-pap | [2024-10-31T17:05:05.889+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 17:06:12 policy-pap | [2024-10-31T17:05:05.890+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 17:06:12 policy-pap | [2024-10-31T17:06:05.649+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 17:06:12 =================================== 17:06:12 ======== Logs from prometheus ======== 17:06:12 prometheus | ts=2024-10-31T17:03:26.729Z caller=main.go:627 level=info msg="No time or size retention was set so using the default time retention" duration=15d 17:06:12 prometheus | ts=2024-10-31T17:03:26.729Z caller=main.go:671 level=info msg="Starting Prometheus Server" mode=server version="(version=2.55.0, branch=HEAD, revision=91d80252c3e528728b0f88d254dd720f6be07cb8)" 17:06:12 prometheus | ts=2024-10-31T17:03:26.729Z caller=main.go:676 level=info build_context="(go=go1.23.2, platform=linux/amd64, user=root@9fad779131cc, date=20241022-13:47:22, tags=netgo,builtinassets,stringlabels)" 17:06:12 prometheus | ts=2024-10-31T17:03:26.729Z caller=main.go:677 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 17:06:12 prometheus | ts=2024-10-31T17:03:26.729Z caller=main.go:678 level=info fd_limits="(soft=1048576, hard=1048576)" 17:06:12 prometheus | ts=2024-10-31T17:03:26.730Z caller=main.go:679 level=info vm_limits="(soft=unlimited, hard=unlimited)" 17:06:12 prometheus | ts=2024-10-31T17:03:26.732Z caller=web.go:585 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 17:06:12 prometheus | 
ts=2024-10-31T17:03:26.733Z caller=main.go:1197 level=info msg="Starting TSDB ..." 17:06:12 prometheus | ts=2024-10-31T17:03:26.735Z caller=tls_config.go:348 level=info component=web msg="Listening on" address=[::]:9090 17:06:12 prometheus | ts=2024-10-31T17:03:26.735Z caller=tls_config.go:351 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 17:06:12 prometheus | ts=2024-10-31T17:03:26.739Z caller=head.go:627 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 17:06:12 prometheus | ts=2024-10-31T17:03:26.739Z caller=head.go:714 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.34µs 17:06:12 prometheus | ts=2024-10-31T17:03:26.739Z caller=head.go:722 level=info component=tsdb msg="Replaying WAL, this may take a while" 17:06:12 prometheus | ts=2024-10-31T17:03:26.739Z caller=head.go:794 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 17:06:12 prometheus | ts=2024-10-31T17:03:26.739Z caller=head.go:831 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=21.481µs wal_replay_duration=260.274µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.34µs total_replay_duration=303.586µs 17:06:12 prometheus | ts=2024-10-31T17:03:26.742Z caller=main.go:1218 level=info fs_type=EXT4_SUPER_MAGIC 17:06:12 prometheus | ts=2024-10-31T17:03:26.742Z caller=main.go:1221 level=info msg="TSDB started" 17:06:12 prometheus | ts=2024-10-31T17:03:26.742Z caller=main.go:1404 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 17:06:12 prometheus | ts=2024-10-31T17:03:26.743Z caller=main.go:1441 level=info msg="updated GOGC" old=100 new=75 17:06:12 prometheus | ts=2024-10-31T17:03:26.743Z caller=main.go:1452 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.057908ms db_storage=1µs remote_storage=1.81µs 
web_handler=610ns query_engine=1.44µs scrape=315.657µs scrape_sd=136.682µs notify=27.25µs notify_sd=10.091µs rules=1.48µs tracing=4.69µs 17:06:12 prometheus | ts=2024-10-31T17:03:26.743Z caller=main.go:1182 level=info msg="Server is ready to receive web requests." 17:06:12 prometheus | ts=2024-10-31T17:03:26.743Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..." 17:06:12 =================================== 17:06:12 ======== Logs from simulator ======== 17:06:12 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 17:06:12 simulator | overriding logback.xml 17:06:12 simulator | 2024-10-31 17:03:28,041 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 17:06:12 simulator | 2024-10-31 17:03:28,121 INFO org.onap.policy.models.simulators starting 17:06:12 simulator | 2024-10-31 17:03:28,121 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 17:06:12 simulator | 2024-10-31 17:03:28,299 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 17:06:12 simulator | 2024-10-31 17:03:28,300 INFO org.onap.policy.models.simulators starting A&AI simulator 17:06:12 simulator | 2024-10-31 17:03:28,403 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:12 simulator | 2024-10-31 17:03:28,414 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:28,416 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:28,421 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:12 simulator | 2024-10-31 17:03:28,475 INFO Session workerName=node0 17:06:12 simulator | 2024-10-31 17:03:29,067 INFO Using GSON for REST calls 17:06:12 simulator | 2024-10-31 17:03:29,155 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 17:06:12 simulator | 2024-10-31 17:03:29,163 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 17:06:12 simulator | 2024-10-31 17:03:29,169 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1547ms 17:06:12 simulator | 2024-10-31 17:03:29,170 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4247 ms. 
17:06:12 simulator | 2024-10-31 17:03:29,175 INFO org.onap.policy.models.simulators starting SDNC simulator 17:06:12 simulator | 2024-10-31 17:03:29,183 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:12 simulator | 2024-10-31 17:03:29,184 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,185 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,186 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:12 simulator | 2024-10-31 17:03:29,193 INFO Session workerName=node0 17:06:12 simulator | 2024-10-31 17:03:29,255 INFO Using GSON for REST calls 17:06:12 simulator | 2024-10-31 17:03:29,265 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 17:06:12 simulator | 2024-10-31 17:03:29,268 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 17:06:12 simulator | 2024-10-31 17:03:29,269 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1646ms 17:06:12 simulator | 2024-10-31 17:03:29,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC 
simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4916 ms. 17:06:12 simulator | 2024-10-31 17:03:29,270 INFO org.onap.policy.models.simulators starting SO simulator 17:06:12 simulator | 2024-10-31 17:03:29,272 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:12 simulator | 2024-10-31 17:03:29,272 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,273 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,274 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:12 simulator | 2024-10-31 17:03:29,277 INFO Session workerName=node0 17:06:12 simulator | 2024-10-31 17:03:29,334 INFO Using GSON for REST calls 17:06:12 simulator | 2024-10-31 17:03:29,346 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 17:06:12 simulator | 2024-10-31 17:03:29,350 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 17:06:12 simulator | 2024-10-31 17:03:29,350 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1728ms 17:06:12 simulator | 2024-10-31 17:03:29,350 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 17:06:12 simulator | 2024-10-31 17:03:29,351 INFO org.onap.policy.models.simulators starting VFC simulator 17:06:12 simulator | 2024-10-31 17:03:29,355 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:12 simulator | 2024-10-31 17:03:29,355 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,355 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:12 simulator | 2024-10-31 17:03:29,356 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:12 simulator | 2024-10-31 17:03:29,357 INFO Session workerName=node0 17:06:12 simulator | 2024-10-31 17:03:29,402 INFO Using GSON for REST calls 17:06:12 simulator | 2024-10-31 17:03:29,412 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 17:06:12 simulator | 2024-10-31 17:03:29,413 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 17:06:12 simulator | 2024-10-31 17:03:29,413 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1790ms 17:06:12 simulator | 2024-10-31 17:03:29,413 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4942 ms. 17:06:12 simulator | 2024-10-31 17:03:29,413 INFO org.onap.policy.models.simulators started 17:06:12 =================================== 17:06:12 ======== Logs from zookeeper ======== 17:06:12 zookeeper | ===> User 17:06:12 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:06:12 zookeeper | ===> Configuring ... 17:06:12 zookeeper | ===> Running preflight checks ... 17:06:12 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 17:06:12 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 17:06:12 zookeeper | ===> Launching ... 17:06:12 zookeeper | ===> Launching zookeeper ... 
17:06:12 zookeeper | [2024-10-31 17:03:34,137] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,139] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,139] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,139] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,139] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,140] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:12 zookeeper | [2024-10-31 17:03:34,140] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:12 zookeeper | [2024-10-31 17:03:34,140] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:12 zookeeper | [2024-10-31 17:03:34,140] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 17:06:12 zookeeper | [2024-10-31 17:03:34,141] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:12 zookeeper | [2024-10-31 17:03:34,142] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 17:06:12 zookeeper | [2024-10-31 17:03:34,152] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@75c072cb (org.apache.zookeeper.server.ServerMetrics) 17:06:12 zookeeper | [2024-10-31 17:03:34,154] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:06:12 zookeeper | [2024-10-31 17:03:34,154] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:06:12 zookeeper | [2024-10-31 17:03:34,156] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 
(org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,163] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,164] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,164] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,164] INFO Server environment:java.version=17.0.12 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,164] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,164] INFO Server environment:java.home=/usr/lib/jvm/java-17-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/protobuf-java-3.23.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-raft-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-runtime-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/connect-json-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/netty-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.
0.2.jar:/usr/bin/../share/java/kafka/kafka-server-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-clients-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/scala-library-2.13.12.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.110.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.110.Final.jar:/usr/bin/../share/java/kafka/kafka-shell-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.110.Final.jar:
/usr/bin/../share/java/kafka/netty-buffer-4.1.110.Final.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.110.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.12.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bi
n/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/trogdor-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-tools-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.7.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.7.1-ccs.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 
17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,165] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,166] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,167] INFO Weighed connection throttling is 
disabled (org.apache.zookeeper.server.BlueThrottle) 17:06:12 zookeeper | [2024-10-31 17:03:34,168] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,168] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,169] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 17:06:12 zookeeper | [2024-10-31 17:03:34,169] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,170] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:12 zookeeper | [2024-10-31 17:03:34,172] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,172] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,172] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 17:06:12 zookeeper | 
[2024-10-31 17:03:34,172] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 17:06:12 zookeeper | [2024-10-31 17:03:34,172] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,191] INFO Logging initialized @360ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 17:06:12 zookeeper | [2024-10-31 17:03:34,238] WARN o.e.j.s.ServletContextHandler@f5958c9{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 17:06:12 zookeeper | [2024-10-31 17:03:34,238] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 17:06:12 zookeeper | [2024-10-31 17:03:34,253] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 17.0.12+7-LTS (org.eclipse.jetty.server.Server) 17:06:12 zookeeper | [2024-10-31 17:03:34,273] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 17:06:12 zookeeper | [2024-10-31 17:03:34,273] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 17:06:12 zookeeper | [2024-10-31 17:03:34,274] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 17:06:12 zookeeper | [2024-10-31 17:03:34,284] WARN ServletContext@o.e.j.s.ServletContextHandler@f5958c9{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 17:06:12 zookeeper | [2024-10-31 17:03:34,292] INFO Started o.e.j.s.ServletContextHandler@f5958c9{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 17:06:12 zookeeper | [2024-10-31 17:03:34,307] INFO Started ServerConnector@436813f3{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 17:06:12 
zookeeper | [2024-10-31 17:03:34,307] INFO Started @480ms (org.eclipse.jetty.server.Server) 17:06:12 zookeeper | [2024-10-31 17:03:34,307] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,312] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,313] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,314] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,316] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,326] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,326] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:06:12 zookeeper | [2024-10-31 17:03:34,326] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 17:06:12 zookeeper | [2024-10-31 17:03:34,326] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 17:06:12 zookeeper | [2024-10-31 17:03:34,330] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 17:06:12 zookeeper | [2024-10-31 17:03:34,330] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:12 zookeeper | [2024-10-31 17:03:34,333] INFO Snapshot loaded in 
6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 17:06:12 zookeeper | [2024-10-31 17:03:34,333] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:12 zookeeper | [2024-10-31 17:03:34,334] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:12 zookeeper | [2024-10-31 17:03:34,342] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 17:06:12 zookeeper | [2024-10-31 17:03:34,343] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 17:06:12 zookeeper | [2024-10-31 17:03:34,356] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 17:06:12 zookeeper | [2024-10-31 17:03:34,356] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 17:06:12 zookeeper | [2024-10-31 17:03:35,343] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 17:06:12 =================================== 17:06:12 Tearing down containers... 
17:06:12 Container policy-csit Stopping 17:06:12 Container grafana Stopping 17:06:12 Container policy-csit Stopped 17:06:12 Container policy-csit Removing 17:06:12 Container policy-apex-pdp Stopping 17:06:12 Container policy-csit Removed 17:06:12 Container grafana Stopped 17:06:12 Container grafana Removing 17:06:12 Container grafana Removed 17:06:12 Container prometheus Stopping 17:06:13 Container prometheus Stopped 17:06:13 Container prometheus Removing 17:06:13 Container prometheus Removed 17:06:22 Container policy-apex-pdp Stopped 17:06:22 Container policy-apex-pdp Removing 17:06:22 Container policy-apex-pdp Removed 17:06:22 Container simulator Stopping 17:06:22 Container policy-pap Stopping 17:06:33 Container simulator Stopped 17:06:33 Container simulator Removing 17:06:33 Container simulator Removed 17:06:33 Container policy-pap Stopped 17:06:33 Container policy-pap Removing 17:06:33 Container policy-pap Removed 17:06:33 Container policy-api Stopping 17:06:33 Container kafka Stopping 17:06:34 Container kafka Stopped 17:06:34 Container kafka Removing 17:06:34 Container kafka Removed 17:06:34 Container zookeeper Stopping 17:06:34 Container zookeeper Stopped 17:06:34 Container zookeeper Removing 17:06:34 Container zookeeper Removed 17:06:43 Container policy-api Stopped 17:06:43 Container policy-api Removing 17:06:43 Container policy-api Removed 17:06:43 Container policy-db-migrator Stopping 17:06:43 Container policy-db-migrator Stopped 17:06:43 Container policy-db-migrator Removing 17:06:43 Container policy-db-migrator Removed 17:06:43 Container mariadb Stopping 17:06:44 Container mariadb Stopped 17:06:44 Container mariadb Removing 17:06:44 Container mariadb Removed 17:06:44 Network compose_default Removing 17:06:44 Network compose_default Removed 17:06:44 $ ssh-agent -k 17:06:44 unset SSH_AUTH_SOCK; 17:06:44 unset SSH_AGENT_PID; 17:06:44 echo Agent pid 2124 killed; 17:06:44 [ssh-agent] Stopped. 17:06:44 Robot results publisher started... 
17:06:44 INFO: Checking test criticality is deprecated and will be dropped in a future release! 17:06:44 -Parsing output xml: 17:06:45 Done! 17:06:45 -Copying log files to build dir: 17:06:45 Done! 17:06:45 -Assigning results to build: 17:06:45 Done! 17:06:45 -Checking thresholds: 17:06:45 Done! 17:06:45 Done publishing Robot results. 17:06:45 [PostBuildScript] - [INFO] Executing post build scripts. 17:06:45 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins139317273580588579.sh 17:06:45 ---> sysstat.sh 17:06:45 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14011373035612294058.sh 17:06:45 ---> package-listing.sh 17:06:45 ++ facter osfamily 17:06:45 ++ tr '[:upper:]' '[:lower:]' 17:06:45 + OS_FAMILY=debian 17:06:45 + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap 17:06:45 + START_PACKAGES=/tmp/packages_start.txt 17:06:45 + END_PACKAGES=/tmp/packages_end.txt 17:06:45 + DIFF_PACKAGES=/tmp/packages_diff.txt 17:06:45 + PACKAGES=/tmp/packages_start.txt 17:06:45 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' 17:06:45 + PACKAGES=/tmp/packages_end.txt 17:06:45 + case "${OS_FAMILY}" in 17:06:45 + dpkg -l 17:06:45 + grep '^ii' 17:06:45 + '[' -f /tmp/packages_start.txt ']' 17:06:45 + '[' -f /tmp/packages_end.txt ']' 17:06:45 + diff /tmp/packages_start.txt /tmp/packages_end.txt 17:06:45 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' 17:06:45 + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ 17:06:45 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ 17:06:45 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins11959823730079699091.sh 17:06:45 ---> capture-instance-metadata.sh 17:06:45 Setup pyenv: 17:06:45 system 17:06:45 3.8.13 17:06:45 3.9.13 17:06:46 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:06:46 lf-activate-venv(): INFO: Reuse 
venv:/tmp/venv-xITB from file:/tmp/.os_lf_venv 17:06:47 lf-activate-venv(): INFO: Installing: lftools 17:06:54 lf-activate-venv(): INFO: Adding /tmp/venv-xITB/bin to PATH 17:06:54 INFO: Running in OpenStack, capturing instance metadata 17:06:55 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins4501212016526413709.sh 17:06:55 provisioning config files... 17:06:55 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config4553738332426548624tmp 17:06:55 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 17:06:55 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 17:06:55 [EnvInject] - Injecting environment variables from a build step. 17:06:55 [EnvInject] - Injecting as environment variables the properties content 17:06:55 SERVER_ID=logs 17:06:55 17:06:55 [EnvInject] - Variables injected successfully. 17:06:55 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16819280041720695521.sh 17:06:55 ---> create-netrc.sh 17:06:55 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5191374772430768736.sh 17:06:55 ---> python-tools-install.sh 17:06:55 Setup pyenv: 17:06:55 system 17:06:55 3.8.13 17:06:55 3.9.13 17:06:55 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:06:55 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xITB from file:/tmp/.os_lf_venv 17:06:56 lf-activate-venv(): INFO: Installing: lftools 17:07:04 lf-activate-venv(): INFO: Adding /tmp/venv-xITB/bin to PATH 17:07:04 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins8770270684656204313.sh 17:07:04 ---> sudo-logs.sh 17:07:04 Archiving 'sudo' log.. 
17:07:04 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins9034049243699799846.sh 17:07:04 ---> job-cost.sh 17:07:04 Setup pyenv: 17:07:04 system 17:07:04 3.8.13 17:07:04 3.9.13 17:07:04 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:04 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xITB from file:/tmp/.os_lf_venv 17:07:05 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 17:07:09 lf-activate-venv(): INFO: Adding /tmp/venv-xITB/bin to PATH 17:07:09 INFO: No Stack... 17:07:09 INFO: Retrieving Pricing Info for: v3-standard-8 17:07:09 INFO: Archiving Costs 17:07:09 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins10227916582339121406.sh 17:07:09 ---> logs-deploy.sh 17:07:09 Setup pyenv: 17:07:09 system 17:07:09 3.8.13 17:07:09 3.9.13 17:07:09 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xITB from file:/tmp/.os_lf_venv 17:07:10 lf-activate-venv(): INFO: Installing: lftools 17:07:18 lf-activate-venv(): INFO: Adding /tmp/venv-xITB/bin to PATH 17:07:18 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/166 17:07:18 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 17:07:19 Archives upload complete. 
17:07:19 INFO: archiving logs to Nexus
17:07:20 ---> uname -a:
17:07:20 Linux prd-ubuntu1804-docker-8c-8g-80667 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
17:07:20
17:07:20
17:07:20 ---> lscpu:
17:07:20 Architecture:        x86_64
17:07:20 CPU op-mode(s):      32-bit, 64-bit
17:07:20 Byte Order:          Little Endian
17:07:20 CPU(s):              8
17:07:20 On-line CPU(s) list: 0-7
17:07:20 Thread(s) per core:  1
17:07:20 Core(s) per socket:  1
17:07:20 Socket(s):           8
17:07:20 NUMA node(s):        1
17:07:20 Vendor ID:           AuthenticAMD
17:07:20 CPU family:          23
17:07:20 Model:               49
17:07:20 Model name:          AMD EPYC-Rome Processor
17:07:20 Stepping:            0
17:07:20 CPU MHz:             2799.996
17:07:20 BogoMIPS:            5599.99
17:07:20 Virtualization:      AMD-V
17:07:20 Hypervisor vendor:   KVM
17:07:20 Virtualization type: full
17:07:20 L1d cache:           32K
17:07:20 L1i cache:           32K
17:07:20 L2 cache:            512K
17:07:20 L3 cache:            16384K
17:07:20 NUMA node0 CPU(s):   0-7
17:07:20 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
17:07:20
17:07:20
17:07:20 ---> nproc:
17:07:20 8
17:07:20
17:07:20
17:07:20 ---> df -h:
17:07:20 Filesystem      Size  Used Avail Use% Mounted on
17:07:20 udev             16G     0   16G   0% /dev
17:07:20 tmpfs           3.2G  708K  3.2G   1% /run
17:07:20 /dev/vda1       155G   14G  142G   9% /
17:07:20 tmpfs            16G     0   16G   0% /dev/shm
17:07:20 tmpfs           5.0M     0  5.0M   0% /run/lock
17:07:20 tmpfs            16G     0   16G   0% /sys/fs/cgroup
17:07:20 /dev/vda15      105M  4.4M  100M   5% /boot/efi
17:07:20 tmpfs           3.2G     0  3.2G   0% /run/user/1001
17:07:20
17:07:20
17:07:20 ---> free -m:
17:07:20               total        used        free      shared  buff/cache   available
17:07:20 Mem:          32167         894       25032           0        6239       30817
17:07:20 Swap:          1023           0        1023
17:07:20
17:07:20
17:07:20 ---> ip addr:
17:07:20 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
17:07:20     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
17:07:20     inet 127.0.0.1/8 scope host lo
17:07:20        valid_lft forever preferred_lft forever
17:07:20     inet6 ::1/128 scope host
17:07:20        valid_lft forever preferred_lft forever
17:07:20 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
17:07:20     link/ether fa:16:3e:be:f8:29 brd ff:ff:ff:ff:ff:ff
17:07:20     inet 10.30.107.198/23 brd 10.30.107.255 scope global dynamic ens3
17:07:20        valid_lft 86025sec preferred_lft 86025sec
17:07:20     inet6 fe80::f816:3eff:febe:f829/64 scope link
17:07:20        valid_lft forever preferred_lft forever
17:07:20 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
17:07:20     link/ether 02:42:4d:cb:c2:5c brd ff:ff:ff:ff:ff:ff
17:07:20     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
17:07:20        valid_lft forever preferred_lft forever
17:07:20     inet6 fe80::42:4dff:fecb:c25c/64 scope link
17:07:20        valid_lft forever preferred_lft forever
17:07:20
17:07:20
17:07:20 ---> sar -b -r -n DEV:
17:07:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-80667)  10/31/24  _x86_64_  (8 CPU)
17:07:20
17:07:20 17:01:07     LINUX RESTART  (8 CPU)
17:07:20
17:07:20 17:02:01          tps      rtps      wtps   bread/s   bwrtn/s
17:07:20 17:03:02       253.32     32.29    221.03   3232.72  73919.33
17:07:20 17:04:01       460.67     12.01    448.65    791.97 134570.70
17:07:20 17:05:01       145.46      0.35    145.11     37.06  39866.69
17:07:20 17:06:01        20.06      0.00     20.06      0.00  23100.93
17:07:20 17:07:01        79.27      2.25     77.02    125.33  24330.40
17:07:20 Average:       190.89      9.38    181.51    838.71  58913.62
17:07:20
17:07:20 17:02:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
17:07:20 17:03:02     26272812  31615152   6666408     20.24    121248   5386016   1923732      5.66   1033348   5141132   2491780
17:07:20 17:04:01     24057308  29813732   8881912     26.96    141508   5750304   8560092     25.19   3024308   5282184       460
17:07:20 17:05:01     23420028  29495424   9519192     28.90    171156   5999836   9182016     27.02   3422256   5468640      1284
17:07:20 17:06:01     23405484  29481972   9533736     28.94    171356   6000580   9182324     27.02   3436744   5468140       232
17:07:20 17:07:01     25623048  31544164   7316172     22.21    173168   5860668   1667052      4.90   1431688   5324736     18052
17:07:20 Average:     24555736  30390089   8383484     25.45    155687   5799481   6103043     17.96   2469669   5336966    502362
17:07:20
17:07:20 17:02:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
17:07:20 17:03:02         ens3   1167.88    614.48  32185.96     56.10      0.00      0.00      0.00      0.00
17:07:20 17:03:02      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
17:07:20 17:03:02           lo     15.16     15.16      1.48      1.48      0.00      0.00      0.00      0.00
17:07:20 17:04:01  veth0b1996e      0.14      0.58      0.01      0.03      0.00      0.00      0.00      0.00
17:07:20 17:04:01  veth1bb6763     53.09     64.28     19.41     15.38      0.00      0.00      0.00      0.00
17:07:20 17:04:01  veth2d3b1cb     24.56     22.71     10.70     16.35      0.00      0.00      0.00      0.00
17:07:20 17:04:01  vethf400cc3      0.73      0.92      0.05      0.05      0.00      0.00      0.00      0.00
17:07:20 17:05:01  veth0b1996e      0.42      0.42      0.04      1.12      0.00      0.00      0.00      0.00
17:07:20 17:05:01  veth1bb6763     18.85     23.40     23.04      6.04      0.00      0.00      0.00      0.00
17:07:20 17:05:01  veth2d3b1cb     21.66     17.48      6.62     23.76      0.00      0.00      0.00      0.00
17:07:20 17:05:01  vethf400cc3      4.00      5.15      0.79      0.52      0.00      0.00      0.00      0.00
17:07:20 17:06:01  veth0b1996e      0.60      0.62      0.05      1.52      0.00      0.00      0.00      0.00
17:07:20 17:06:01  veth1bb6763     29.30     36.01     37.16     11.69      0.00      0.00      0.00      0.00
17:07:20 17:06:01  veth2d3b1cb      0.53      0.53      0.63      0.08      0.00      0.00      0.00      0.00
17:07:20 17:06:01  vethf400cc3      3.23      4.70      0.66      0.36      0.00      0.00      0.00      0.00
17:07:20 17:07:01         ens3   1644.30    959.50  34631.37    167.06      0.00      0.00      0.00      0.00
17:07:20 17:07:01      docker0     15.47     20.25      2.24    286.91      0.00      0.00      0.00      0.00
17:07:20 17:07:01           lo     27.87     27.87      2.58      2.58      0.00      0.00      0.00      0.00
17:07:20 Average:         ens3    264.62    145.00   6750.37     19.03      0.00      0.00      0.00      0.00
17:07:20 Average:      docker0      3.10      4.06      0.45     57.54      0.00      0.00      0.00      0.00
17:07:20 Average:           lo      4.65      4.65      0.43      0.43      0.00      0.00      0.00      0.00
17:07:20
17:07:20
17:07:20 ---> sar -P ALL:
17:07:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-80667)  10/31/24  _x86_64_  (8 CPU)
17:07:20
17:07:20 17:01:07     LINUX RESTART  (8 CPU)
17:07:20
17:07:20 17:02:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
17:07:20 17:03:02     all     16.37      0.00      5.16      4.98      0.07     73.42
17:07:20 17:03:02       0      9.90      0.00      4.90     14.68      0.05     70.48
17:07:20 17:03:02       1     10.19      0.00      4.77      0.71      0.10     84.23
17:07:20 17:03:02       2     10.02      0.00      5.16      6.96      0.07     77.79
17:07:20 17:03:02       3     10.82      0.00      4.26      1.12      0.03     83.76
17:07:20 17:03:02       4     47.13      0.00      6.24     10.73      0.10     35.80
17:07:20 17:03:02       5     18.56      0.00      5.35      0.99      0.07     75.03
17:07:20 17:03:02       6     13.86      0.00      5.51      1.83      0.03     78.77
17:07:20 17:03:02       7     10.43      0.00      5.14      2.89      0.05     81.50
17:07:20 17:04:01     all     20.55      0.00      3.73      8.76      0.08     66.88
17:07:20 17:04:01       0     19.51      0.00      3.52      1.31      0.07     75.59
17:07:20 17:04:01       1     22.87      0.00      5.09     34.32      0.10     37.62
17:07:20 17:04:01       2     21.86      0.00      3.80     16.74      0.09     57.52
17:07:20 17:04:01       3     23.27      0.00      3.50      5.23      0.09     67.91
17:07:20 17:04:01       4     20.43      0.00      3.62      2.68      0.07     73.21
17:07:20 17:04:01       5     19.35      0.00      3.73      1.62      0.09     75.22
17:07:20 17:04:01       6     14.86      0.00      2.75      6.28      0.09     76.03
17:07:20 17:04:01       7     22.24      0.00      3.87      2.00      0.09     71.80
17:07:20 17:05:01     all     13.72      0.00      2.31      2.36      0.08     81.54
17:07:20 17:05:01       0     11.11      0.00      2.23      1.22      0.07     85.38
17:07:20 17:05:01       1     12.76      0.00      2.07      0.69      0.07     84.41
17:07:20 17:05:01       2     15.39      0.00      2.49      1.06      0.07     80.99
17:07:20 17:05:01       3     13.41      0.00      1.99      4.58      0.07     79.95
17:07:20 17:05:01       4     12.46      0.00      2.38      3.82      0.08     81.25
17:07:20 17:05:01       5     16.56      0.00      1.90      0.66      0.08     80.80
17:07:20 17:05:01       6     11.79      0.00      2.46      2.12      0.07     83.57
17:07:20 17:05:01       7     16.26      0.00      2.94      4.69      0.10     76.02
17:07:20 17:06:01     all      3.48      0.00      0.34      1.09      0.06     95.04
17:07:20 17:06:01       0      2.47      0.00      0.43      0.02      0.07     97.01
17:07:20 17:06:01       1      2.92      0.00      0.15      0.02      0.03     96.88
17:07:20 17:06:01       2      2.17      0.00      0.27      0.00      0.07     97.49
17:07:20 17:06:01       3      4.39      0.00      0.39      0.00      0.07     95.16
17:07:20 17:06:01       4      3.64      0.00      0.27      0.02      0.02     96.06
17:07:20 17:06:01       5      4.57      0.00      0.43      0.02      0.05     94.93
17:07:20 17:06:01       6      4.29      0.00      0.52      0.12      0.07     95.01
17:07:20 17:06:01       7      3.38      0.00      0.27      8.54      0.07     87.74
17:07:20 17:07:01     all      4.68      0.00      0.80      1.47      0.05     93.00
17:07:20 17:07:01       0      1.35      0.00      0.83      0.88      0.05     96.88
17:07:20 17:07:01       1     14.03      0.00      0.92      0.37      0.05     84.63
17:07:20 17:07:01       2      3.15      0.00      0.79      0.10      0.05     95.92
17:07:20 17:07:01       3      1.91      0.00      0.65      0.03      0.03     97.37
17:07:20 17:07:01       4      3.89      0.00      0.92      0.17      0.05     94.98
17:07:20 17:07:01       5      2.31      0.00      0.74      0.10      0.05     96.81
17:07:20 17:07:01       6      8.33      0.00      0.84      0.30      0.05     90.48
17:07:20 17:07:01       7      2.49      0.00      0.72      9.79      0.05     86.94
17:07:20 Average:    all     11.71      0.00      2.46      3.71      0.07     82.05
17:07:20 Average:      0      8.82      0.00      2.38      3.61      0.06     85.13
17:07:20 Average:      1     12.50      0.00      2.58      7.09      0.07     77.76
17:07:20 Average:      2     10.47      0.00      2.49      4.93      0.07     82.05
17:07:20 Average:      3     10.71      0.00      2.15      2.18      0.06     84.90
17:07:20 Average:      4     17.47      0.00      2.68      3.48      0.06     76.31
17:07:20 Average:      5     12.23      0.00      2.42      0.67      0.07     84.61
17:07:20 Average:      6     10.60      0.00      2.41      2.11      0.06     84.82
17:07:20 Average:      7     10.90      0.00      2.58      5.60      0.07     80.85
17:07:20
17:07:20
17:07:20
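The `---> command:` blocks above are the typical output of a post-build diagnostics step that runs a fixed list of host-inspection commands and labels each one's output before the log is archived to Nexus. A minimal sketch of such a loop is shown below; the function name `dump_diag` and the exact command list are illustrative assumptions, not the actual LF/ONAP archiving script.

```shell
#!/bin/bash
# Sketch of a labelled host-diagnostics dump (illustrative, not the
# real Jenkins/LF script). Each command's output is prefixed with a
# "---> command:" header, matching the format seen in the log above.
dump_diag() {
    local cmd
    for cmd in "uname -a" "lscpu" "nproc" "df -h" "free -m" \
               "ip addr" "sar -b -r -n DEV" "sar -P ALL"; do
        echo "---> ${cmd}:"
        # Run the command; tolerate tools (e.g. sar) that may be
        # missing on a given build agent instead of aborting the dump.
        ${cmd} 2>/dev/null || echo "(${cmd%% *} not available)"
        echo ""
    done
}

dump_diag
```

Keeping each command's failure non-fatal matters here: the dump runs in a cleanup stage, so a missing `sysstat` package on one agent should not mask the rest of the diagnostics.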