17:02:03 Started by timer
17:02:03 Running as SYSTEM
17:02:03 [EnvInject] - Loading node environment variables.
17:02:03 Building remotely on prd-ubuntu1804-docker-8c-8g-20057 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
17:02:03 [ssh-agent] Looking for ssh-agent implementation...
17:02:03 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
17:02:03 $ ssh-agent
17:02:03 SSH_AUTH_SOCK=/tmp/ssh-inJE9qC67A9k/agent.2034
17:02:03 SSH_AGENT_PID=2036
17:02:03 [ssh-agent] Started.
17:02:03 Running ssh-add (command line suppressed)
17:02:03 Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_1729300640932115786.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_1729300640932115786.key)
17:02:03 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
17:02:03 The recommended git tool is: NONE
17:02:04 using credential onap-jenkins-ssh
17:02:04 Wiping out workspace first.
17:02:04 Cloning the remote Git repository
17:02:05 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
17:02:05 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
17:02:05 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
17:02:05 > git --version # timeout=10
17:02:05 > git --version # 'git version 2.17.1'
17:02:05 using GIT_SSH to set credentials Gerrit user
17:02:05 Verifying host key using manually-configured host key entries
17:02:05 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
17:02:05 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
17:02:05 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
17:02:05 Avoid second fetch
17:02:05 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
17:02:05 Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
17:02:05 > git config core.sparsecheckout # timeout=10
17:02:05 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
17:02:06 Commit message: "Fix timeout in pap CSIT for auditing undeploys"
17:02:06 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
17:02:09 provisioning config files...
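The checkout above pins the job to commit a0de87f9d2d88fd7f870703053c99c7149d608ec on the newdelhi branch of policy/docker. A minimal sketch for reproducing that checkout locally, assuming anonymous read access to the git:// mirror (the target directory name is arbitrary):

  # clone the mirror and check out the exact revision built by this job
  git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker
  cd policy-docker
  git checkout a0de87f9d2d88fd7f870703053c99c7149d608ec   # tip of origin/newdelhi at build time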
17:02:09 copy managed file [npmrc] to file:/home/jenkins/.npmrc
17:02:09 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
17:02:09 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins5639650057498603698.sh
17:02:09 ---> python-tools-install.sh
17:02:09 Setup pyenv:
17:02:09 * system (set by /opt/pyenv/version)
17:02:09 * 3.8.13 (set by /opt/pyenv/version)
17:02:09 * 3.9.13 (set by /opt/pyenv/version)
17:02:09 * 3.10.6 (set by /opt/pyenv/version)
17:02:13 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-60gH
17:02:13 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
17:02:18 lf-activate-venv(): INFO: Installing: lftools
17:02:43 lf-activate-venv(): INFO: Adding /tmp/venv-60gH/bin to PATH
17:02:43 Generating Requirements File
17:03:04 Python 3.10.6
17:03:05 pip 25.1.1 from /tmp/venv-60gH/lib/python3.10/site-packages/pip (python 3.10)
17:03:05 appdirs==1.4.4
17:03:05 argcomplete==3.6.2
17:03:05 aspy.yaml==1.3.0
17:03:05 attrs==25.3.0
17:03:05 autopage==0.5.2
17:03:05 beautifulsoup4==4.13.4
17:03:05 boto3==1.38.33
17:03:05 botocore==1.38.33
17:03:05 bs4==0.0.2
17:03:05 cachetools==5.5.2
17:03:05 certifi==2025.4.26
17:03:05 cffi==1.17.1
17:03:05 cfgv==3.4.0
17:03:05 chardet==5.2.0
17:03:05 charset-normalizer==3.4.2
17:03:05 click==8.2.1
17:03:05 cliff==4.10.0
17:03:05 cmd2==2.6.1
17:03:05 cryptography==3.3.2
17:03:05 debtcollector==3.0.0
17:03:05 decorator==5.2.1
17:03:05 defusedxml==0.7.1
17:03:05 Deprecated==1.2.18
17:03:05 distlib==0.3.9
17:03:05 dnspython==2.7.0
17:03:05 docker==7.1.0
17:03:05 dogpile.cache==1.4.0
17:03:05 durationpy==0.10
17:03:05 email_validator==2.2.0
17:03:05 filelock==3.18.0
17:03:05 future==1.0.0
17:03:05 gitdb==4.0.12
17:03:05 GitPython==3.1.44
17:03:05 google-auth==2.40.3
17:03:05 httplib2==0.22.0
17:03:05 identify==2.6.12
17:03:05 idna==3.10
17:03:05 importlib-resources==1.5.0
17:03:05 iso8601==2.1.0
17:03:05 Jinja2==3.1.6
17:03:05 jmespath==1.0.1
17:03:05 jsonpatch==1.33
17:03:05 jsonpointer==3.0.0
17:03:05 jsonschema==4.24.0
17:03:05 jsonschema-specifications==2025.4.1
17:03:05 keystoneauth1==5.11.0
17:03:05 kubernetes==33.1.0
17:03:05 lftools==0.37.13
17:03:05 lxml==5.4.0
17:03:05 MarkupSafe==3.0.2
17:03:05 msgpack==1.1.0
17:03:05 multi_key_dict==2.0.3
17:03:05 munch==4.0.0
17:03:05 netaddr==1.3.0
17:03:05 niet==1.4.2
17:03:05 nodeenv==1.9.1
17:03:05 oauth2client==4.1.3
17:03:05 oauthlib==3.2.2
17:03:05 openstacksdk==4.6.0
17:03:05 os-client-config==2.1.0
17:03:05 os-service-types==1.7.0
17:03:05 osc-lib==4.0.2
17:03:05 oslo.config==9.8.0
17:03:05 oslo.context==6.0.0
17:03:05 oslo.i18n==6.5.1
17:03:05 oslo.log==7.1.0
17:03:05 oslo.serialization==5.7.0
17:03:05 oslo.utils==9.0.0
17:03:05 packaging==25.0
17:03:05 pbr==6.1.1
17:03:05 platformdirs==4.3.8
17:03:05 prettytable==3.16.0
17:03:05 psutil==7.0.0
17:03:05 pyasn1==0.6.1
17:03:05 pyasn1_modules==0.4.2
17:03:05 pycparser==2.22
17:03:05 pygerrit2==2.0.15
17:03:05 PyGithub==2.6.1
17:03:05 PyJWT==2.10.1
17:03:05 PyNaCl==1.5.0
17:03:05 pyparsing==2.4.7
17:03:05 pyperclip==1.9.0
17:03:05 pyrsistent==0.20.0
17:03:05 python-cinderclient==9.7.0
17:03:05 python-dateutil==2.9.0.post0
17:03:05 python-heatclient==4.2.0
17:03:05 python-jenkins==1.8.2
17:03:05 python-keystoneclient==5.6.0
17:03:05 python-magnumclient==4.8.1
17:03:05 python-openstackclient==8.1.0
17:03:05 python-swiftclient==4.8.0
17:03:05 PyYAML==6.0.2
17:03:05 referencing==0.36.2
17:03:05 requests==2.32.4
17:03:05 requests-oauthlib==2.0.0
17:03:05 requestsexceptions==1.4.0
17:03:05 rfc3986==2.0.0
17:03:05 rpds-py==0.25.1
17:03:05 rsa==4.9.1
17:03:05 ruamel.yaml==0.18.14
17:03:05 ruamel.yaml.clib==0.2.12
17:03:05 s3transfer==0.13.0
17:03:05 simplejson==3.20.1
17:03:05 six==1.17.0
17:03:05 smmap==5.0.2
17:03:05 soupsieve==2.7
17:03:05 stevedore==5.4.1
17:03:05 tabulate==0.9.0
17:03:05 toml==0.10.2
17:03:05 tomlkit==0.13.3
17:03:05 tqdm==4.67.1
17:03:05 typing_extensions==4.14.0
17:03:05 tzdata==2025.2
17:03:05 urllib3==1.26.20
17:03:05 virtualenv==20.31.2
17:03:05 wcwidth==0.2.13
17:03:05 websocket-client==1.8.0
17:03:05 wrapt==1.17.2
17:03:05 xdg==6.0.0
17:03:05 xmltodict==0.14.2
17:03:05 yq==3.4.3
17:03:05 [EnvInject] - Injecting environment variables from a build step.
17:03:05 [EnvInject] - Injecting as environment variables the properties content
17:03:05 SET_JDK_VERSION=openjdk17
17:03:05 GIT_URL="git://cloud.onap.org/mirror"
17:03:05
17:03:05 [EnvInject] - Variables injected successfully.
17:03:05 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins11225159999424973666.sh
17:03:05 ---> update-java-alternatives.sh
17:03:05 ---> Updating Java version
17:03:05 ---> Ubuntu/Debian system detected
17:03:05 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
17:03:05 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
17:03:05 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
17:03:06 openjdk version "17.0.4" 2022-07-19
17:03:06 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
17:03:06 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
17:03:06 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
17:03:06 [EnvInject] - Injecting environment variables from a build step.
17:03:06 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
17:03:06 [EnvInject] - Variables injected successfully.
17:03:06 [policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17662326185589472977.sh
17:03:06 + /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
17:03:06 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
17:03:06 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
17:03:06 Configure a credential helper to remove this warning. See
17:03:06 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
17:03:06
17:03:06 Login Succeeded
17:03:06 docker: 'compose' is not a docker command.
17:03:06 See 'docker --help'
17:03:06 Docker Compose Plugin not installed. Installing now...
17:03:06 % Total % Received % Xferd Average Speed Time Time Time Current
17:03:06 Dload Upload Total Spent Left Speed
17:03:06 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
17:03:07 100 60.0M 100 60.0M 0 0 71.3M 0 --:--:-- --:--:-- --:--:-- 71.3M
17:03:07 Setting project configuration for: pap
17:03:07 Configuring docker compose...
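The update-java-alternatives.sh step above switches the node's default JDK to OpenJDK 17. A rough sketch of the equivalent manual commands on Ubuntu/Debian, assuming the openjdk-17-jdk package is already installed (the paths match the log output):

  # point the java/javac alternatives at OpenJDK 17 (this is what produces the "manual mode" messages above)
  sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
  sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
  export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
  java -version   # should now report openjdk 17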
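Because `docker compose` is not yet available as a CLI plugin, the CSIT script downloads it itself (the ~60 MB curl transfer above). A sketch of a manual Compose v2 plugin install, assuming the standard per-user plugin directory; the release version pinned by run-project-csit.sh is not shown in the log, so v2.27.0 below is only an example:

  # install the docker compose CLI plugin for the current user
  DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
  mkdir -p "$DOCKER_CONFIG/cli-plugins"
  curl -SL https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64 \
       -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
  chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
  docker compose version   # verify the plugin is picked up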
17:03:10 Starting apex-pdp application with Grafana
17:03:10 policy-db-migrator Pulling
17:03:10 api Pulling
17:03:10 grafana Pulling
17:03:10 pap Pulling
17:03:10 zookeeper Pulling
17:03:10 simulator Pulling
17:03:10 kafka Pulling
17:03:10 mariadb Pulling
17:03:10 prometheus Pulling
17:03:10 apex-pdp Pulling
[17:03:10-17:03:23 per-layer image pull progress for the ten services above: repeated "Pulling fs layer", "Waiting", "Downloading", "Verifying Checksum", "Download complete", "Extracting" and "Pull complete" updates for each layer]
17:03:16 policy-db-migrator Pulled
17:03:16 simulator Pulled
17:03:21 api Pulled
17:03:22 pap Pulled
17:03:23 eca0188f477e Downloading [============================> ] 21.12MB/37.17MB 17:03:23 eca0188f477e Downloading [============================> ] 21.12MB/37.17MB 17:03:23 f3b09c502777 Extracting [========> ] 9.47MB/56.52MB 17:03:23 353af139d39e Extracting [=================================> ] 165.4MB/246.5MB 17:03:23 f836d47fdc4d Extracting [===> ] 6.685MB/107.3MB 17:03:23 eabd8714fec9 Downloading [==> ] 18.68MB/375MB 17:03:23 eabd8714fec9 Downloading [==> ] 18.68MB/375MB 17:03:23 10ac4908093d Extracting [===================> ] 12.12MB/30.43MB 17:03:23 806be17e856d Downloading [=============================================> ] 81.1MB/89.72MB 17:03:23 eca0188f477e Downloading [========================================> ] 29.8MB/37.17MB 17:03:23 eca0188f477e Downloading [========================================> ] 29.8MB/37.17MB 17:03:23 353af139d39e Extracting [==================================> ] 172.1MB/246.5MB 17:03:23 f836d47fdc4d Extracting [====> ] 9.47MB/107.3MB 17:03:23 f3b09c502777 Extracting [==========> ] 11.7MB/56.52MB 17:03:23 eabd8714fec9 Downloading [===> ] 23.51MB/375MB 17:03:23 eabd8714fec9 Downloading [===> ] 23.51MB/375MB 17:03:23 10ac4908093d Extracting [========================> ] 14.75MB/30.43MB 17:03:23 806be17e856d Verifying Checksum 17:03:23 806be17e856d Download complete 17:03:23 353af139d39e Extracting [====================================> ] 178.3MB/246.5MB 17:03:23 eabd8714fec9 Downloading [====> ] 33.17MB/375MB 17:03:23 eabd8714fec9 Downloading [====> ] 33.17MB/375MB 17:03:23 f836d47fdc4d Extracting [=====> ] 12.26MB/107.3MB 17:03:23 f3b09c502777 Extracting [=============> ] 15.04MB/56.52MB 17:03:23 10ac4908093d Extracting [================================> ] 19.99MB/30.43MB 17:03:23 45fd2fec8a19 Downloading [==========================================> ] 934B/1.103kB 17:03:23 45fd2fec8a19 Downloading [==================================================>] 1.103kB/1.103kB 17:03:23 45fd2fec8a19 Verifying Checksum 17:03:23 45fd2fec8a19 Download complete 17:03:23 45fd2fec8a19 Verifying Checksum 17:03:23 45fd2fec8a19 Download complete 17:03:23 eca0188f477e Downloading [=========================================> ] 30.94MB/37.17MB 17:03:23 eca0188f477e Downloading [=========================================> ] 30.94MB/37.17MB 17:03:23 353af139d39e Extracting [=====================================> ] 185.5MB/246.5MB 17:03:23 eabd8714fec9 Downloading [=====> ] 40.15MB/375MB 17:03:23 eabd8714fec9 Downloading [=====> ] 40.15MB/375MB 17:03:23 f836d47fdc4d Extracting [======> ] 14.48MB/107.3MB 17:03:23 10ac4908093d Extracting [=======================================> ] 23.92MB/30.43MB 17:03:23 f3b09c502777 Extracting [================> ] 18.94MB/56.52MB 17:03:23 8f10199ed94b Downloading [> ] 90.21kB/8.768MB 17:03:23 8f10199ed94b Downloading [> ] 90.21kB/8.768MB 17:03:23 eca0188f477e Downloading [===============================================> ] 35.46MB/37.17MB 17:03:23 eca0188f477e Downloading [===============================================> ] 35.46MB/37.17MB 17:03:23 353af139d39e Extracting [======================================> ] 191.6MB/246.5MB 17:03:23 eca0188f477e Verifying Checksum 17:03:23 eca0188f477e Verifying Checksum 17:03:23 eca0188f477e Download complete 17:03:23 eca0188f477e Download complete 17:03:23 eabd8714fec9 Downloading [======> ] 47.14MB/375MB 17:03:23 eabd8714fec9 Downloading [======> ] 47.14MB/375MB 17:03:23 f836d47fdc4d Extracting [=======> ] 16.71MB/107.3MB 17:03:23 f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 17:03:23 
8f10199ed94b Downloading [=====================> ] 3.775MB/8.768MB 17:03:23 8f10199ed94b Downloading [=====================> ] 3.775MB/8.768MB 17:03:23 353af139d39e Extracting [========================================> ] 199.4MB/246.5MB 17:03:23 10ac4908093d Extracting [===========================================> ] 26.54MB/30.43MB 17:03:23 eabd8714fec9 Downloading [======> ] 49.82MB/375MB 17:03:23 eabd8714fec9 Downloading [======> ] 49.82MB/375MB 17:03:23 f963a77d2726 Downloading [==> ] 933B/21.44kB 17:03:23 f963a77d2726 Downloading [==> ] 933B/21.44kB 17:03:23 f963a77d2726 Downloading [==================================================>] 21.44kB/21.44kB 17:03:23 f963a77d2726 Verifying Checksum 17:03:23 f963a77d2726 Verifying Checksum 17:03:23 f963a77d2726 Download complete 17:03:23 f963a77d2726 Download complete 17:03:23 8f10199ed94b Downloading [================================> ] 5.733MB/8.768MB 17:03:23 8f10199ed94b Downloading [================================> ] 5.733MB/8.768MB 17:03:23 f3b09c502777 Extracting [====================> ] 23.4MB/56.52MB 17:03:23 eca0188f477e Extracting [> ] 393.2kB/37.17MB 17:03:23 eca0188f477e Extracting [> ] 393.2kB/37.17MB 17:03:23 353af139d39e Extracting [=========================================> ] 205MB/246.5MB 17:03:23 10ac4908093d Extracting [==============================================> ] 28.51MB/30.43MB 17:03:23 f836d47fdc4d Extracting [========> ] 17.83MB/107.3MB 17:03:23 8f10199ed94b Downloading [==================================================>] 8.768MB/8.768MB 17:03:23 8f10199ed94b Download complete 17:03:23 8f10199ed94b Downloading [==================================================>] 8.768MB/8.768MB 17:03:23 8f10199ed94b Verifying Checksum 17:03:23 8f10199ed94b Download complete 17:03:23 eabd8714fec9 Downloading [=======> ] 55.18MB/375MB 17:03:23 eabd8714fec9 Downloading [=======> ] 55.18MB/375MB 17:03:23 f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB 17:03:23 eca0188f477e Extracting [====> ] 3.146MB/37.17MB 17:03:23 eca0188f477e Extracting [====> ] 3.146MB/37.17MB 17:03:23 f3a82e9f1761 Downloading [> ] 457.2kB/44.41MB 17:03:23 f3a82e9f1761 Downloading [> ] 457.2kB/44.41MB 17:03:23 353af139d39e Extracting [==========================================> ] 209.5MB/246.5MB 17:03:23 10ac4908093d Extracting [===============================================> ] 29.16MB/30.43MB 17:03:23 f836d47fdc4d Extracting [=========> ] 19.5MB/107.3MB 17:03:23 eabd8714fec9 Downloading [========> ] 60MB/375MB 17:03:23 eabd8714fec9 Downloading [========> ] 60MB/375MB 17:03:23 79161a3f5362 Downloading [==========> ] 934B/4.656kB 17:03:23 79161a3f5362 Downloading [==================================================>] 4.656kB/4.656kB 17:03:23 79161a3f5362 Downloading [==========> ] 934B/4.656kB 17:03:23 79161a3f5362 Downloading [==================================================>] 4.656kB/4.656kB 17:03:23 79161a3f5362 Verifying Checksum 17:03:23 79161a3f5362 Verifying Checksum 17:03:23 79161a3f5362 Download complete 17:03:23 79161a3f5362 Download complete 17:03:23 eca0188f477e Extracting [========> ] 6.291MB/37.17MB 17:03:23 eca0188f477e Extracting [========> ] 6.291MB/37.17MB 17:03:24 f3a82e9f1761 Downloading [=======> ] 6.826MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [=======> ] 6.826MB/44.41MB 17:03:24 f3b09c502777 Extracting [========================> ] 27.85MB/56.52MB 17:03:24 f836d47fdc4d Extracting [==========> ] 22.28MB/107.3MB 17:03:24 353af139d39e Extracting [===========================================> ] 213.9MB/246.5MB 17:03:24 
10ac4908093d Extracting [=================================================> ] 30.15MB/30.43MB 17:03:24 eabd8714fec9 Downloading [========> ] 64.81MB/375MB 17:03:24 eabd8714fec9 Downloading [========> ] 64.81MB/375MB 17:03:24 eca0188f477e Extracting [===========> ] 8.651MB/37.17MB 17:03:24 eca0188f477e Extracting [===========> ] 8.651MB/37.17MB 17:03:24 9c266ba63f51 Downloading [==========================================> ] 934B/1.105kB 17:03:24 9c266ba63f51 Downloading [==========================================> ] 934B/1.105kB 17:03:24 9c266ba63f51 Download complete 17:03:24 9c266ba63f51 Download complete 17:03:24 f3a82e9f1761 Downloading [==============> ] 12.72MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [==============> ] 12.72MB/44.41MB 17:03:24 f3b09c502777 Extracting [=============================> ] 32.87MB/56.52MB 17:03:24 353af139d39e Extracting [============================================> ] 219.5MB/246.5MB 17:03:24 f836d47fdc4d Extracting [===========> ] 25.07MB/107.3MB 17:03:24 10ac4908093d Extracting [==================================================>] 30.43MB/30.43MB 17:03:24 eabd8714fec9 Downloading [=========> ] 71.25MB/375MB 17:03:24 eabd8714fec9 Downloading [=========> ] 71.25MB/375MB 17:03:24 eca0188f477e Extracting [==============> ] 10.62MB/37.17MB 17:03:24 eca0188f477e Extracting [==============> ] 10.62MB/37.17MB 17:03:24 f3b09c502777 Extracting [=====================================> ] 42.89MB/56.52MB 17:03:24 f3a82e9f1761 Downloading [=====================> ] 19.05MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [=====================> ] 19.05MB/44.41MB 17:03:24 353af139d39e Extracting [==============================================> ] 228.4MB/246.5MB 17:03:24 f836d47fdc4d Extracting [=============> ] 28.41MB/107.3MB 17:03:24 2e8a7df9c2ee Downloading [==================================================>] 851B/851B 17:03:24 2e8a7df9c2ee Verifying Checksum 17:03:24 2e8a7df9c2ee Verifying Checksum 17:03:24 2e8a7df9c2ee Download complete 17:03:24 2e8a7df9c2ee Download complete 17:03:24 eabd8714fec9 Downloading [==========> ] 76.08MB/375MB 17:03:24 eabd8714fec9 Downloading [==========> ] 76.08MB/375MB 17:03:24 10ac4908093d Pull complete 17:03:24 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 17:03:24 44779101e748 Extracting [==================================================>] 1.744kB/1.744kB 17:03:24 eca0188f477e Extracting [====================> ] 15.34MB/37.17MB 17:03:24 eca0188f477e Extracting [====================> ] 15.34MB/37.17MB 17:03:24 f3a82e9f1761 Downloading [============================> ] 25.43MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [============================> ] 25.43MB/44.41MB 17:03:24 f3b09c502777 Extracting [==============================================> ] 52.36MB/56.52MB 17:03:24 353af139d39e Extracting [================================================> ] 236.7MB/246.5MB 17:03:24 f836d47fdc4d Extracting [===============> ] 32.87MB/107.3MB 17:03:24 10f05dd8b1db Downloading [==================================================>] 98B/98B 17:03:24 10f05dd8b1db Verifying Checksum 17:03:24 10f05dd8b1db Download complete 17:03:24 10f05dd8b1db Verifying Checksum 17:03:24 10f05dd8b1db Download complete 17:03:24 eabd8714fec9 Downloading [==========> ] 80.39MB/375MB 17:03:24 eabd8714fec9 Downloading [==========> ] 80.39MB/375MB 17:03:24 eca0188f477e Extracting [=========================> ] 19.27MB/37.17MB 17:03:24 eca0188f477e Extracting [=========================> ] 19.27MB/37.17MB 17:03:24 f3a82e9f1761 
Downloading [=================================> ] 29.95MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [=================================> ] 29.95MB/44.41MB 17:03:24 f3b09c502777 Extracting [=================================================> ] 55.71MB/56.52MB 17:03:24 353af139d39e Extracting [==================================================>] 246.5MB/246.5MB 17:03:24 f836d47fdc4d Extracting [================> ] 35.65MB/107.3MB 17:03:24 44779101e748 Pull complete 17:03:24 eabd8714fec9 Downloading [===========> ] 85.77MB/375MB 17:03:24 eabd8714fec9 Downloading [===========> ] 85.77MB/375MB 17:03:24 41dac8b43ba6 Download complete 17:03:24 41dac8b43ba6 Download complete 17:03:24 353af139d39e Pull complete 17:03:24 a721db3e3f3d Extracting [> ] 65.54kB/5.526MB 17:03:24 eca0188f477e Extracting [===============================> ] 23.59MB/37.17MB 17:03:24 eca0188f477e Extracting [===============================> ] 23.59MB/37.17MB 17:03:24 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 17:03:24 f3a82e9f1761 Downloading [=======================================> ] 34.96MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [=======================================> ] 34.96MB/44.41MB 17:03:24 apex-pdp Pulled 17:03:24 f836d47fdc4d Extracting [=================> ] 38.44MB/107.3MB 17:03:24 f3b09c502777 Pull complete 17:03:24 408012a7b118 Extracting [==================================================>] 637B/637B 17:03:24 408012a7b118 Extracting [==================================================>] 637B/637B 17:03:24 eabd8714fec9 Downloading [============> ] 91.65MB/375MB 17:03:24 eabd8714fec9 Downloading [============> ] 91.65MB/375MB 17:03:24 a721db3e3f3d Extracting [==> ] 262.1kB/5.526MB 17:03:24 f3a82e9f1761 Downloading [================================================> ] 42.71MB/44.41MB 17:03:24 f3a82e9f1761 Downloading [================================================> ] 42.71MB/44.41MB 17:03:24 eca0188f477e Extracting [===================================> ] 26.74MB/37.17MB 17:03:24 eca0188f477e Extracting [===================================> ] 26.74MB/37.17MB 17:03:24 71a9f6a9ab4d Downloading [> ] 3.67kB/230.6kB 17:03:24 71a9f6a9ab4d Downloading [> ] 3.67kB/230.6kB 17:03:24 f3a82e9f1761 Verifying Checksum 17:03:24 f3a82e9f1761 Download complete 17:03:24 f3a82e9f1761 Verifying Checksum 17:03:24 f3a82e9f1761 Download complete 17:03:24 71a9f6a9ab4d Verifying Checksum 17:03:24 71a9f6a9ab4d Download complete 17:03:24 71a9f6a9ab4d Verifying Checksum 17:03:24 71a9f6a9ab4d Download complete 17:03:24 f836d47fdc4d Extracting [===================> ] 41.22MB/107.3MB 17:03:24 eabd8714fec9 Downloading [=============> ] 99.75MB/375MB 17:03:24 eabd8714fec9 Downloading [=============> ] 99.75MB/375MB 17:03:24 a721db3e3f3d Extracting [==================================> ] 3.801MB/5.526MB 17:03:24 eca0188f477e Extracting [==========================================> ] 31.85MB/37.17MB 17:03:24 eca0188f477e Extracting [==========================================> ] 31.85MB/37.17MB 17:03:24 408012a7b118 Pull complete 17:03:24 f836d47fdc4d Extracting [=====================> ] 45.68MB/107.3MB 17:03:24 5ee96432c7eb Downloading [============> ] 934B/3.628kB 17:03:24 5ee96432c7eb Downloading [==================================================>] 3.628kB/3.628kB 17:03:24 5ee96432c7eb Verifying Checksum 17:03:24 5ee96432c7eb Download complete 17:03:24 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 17:03:24 44986281b8b9 Extracting 
[==================================================>] 4.022kB/4.022kB 17:03:24 c81b87c3efcc Downloading [> ] 536kB/127.4MB 17:03:24 eabd8714fec9 Downloading [==============> ] 107.8MB/375MB 17:03:24 eabd8714fec9 Downloading [==============> ] 107.8MB/375MB 17:03:24 a721db3e3f3d Extracting [=========================================> ] 4.588MB/5.526MB 17:03:24 eca0188f477e Extracting [==============================================> ] 34.21MB/37.17MB 17:03:24 eca0188f477e Extracting [==============================================> ] 34.21MB/37.17MB 17:03:24 f836d47fdc4d Extracting [======================> ] 48.46MB/107.3MB 17:03:24 c81b87c3efcc Downloading [==> ] 5.897MB/127.4MB 17:03:24 da3ed5db7103 Downloading [> ] 527.8kB/127.4MB 17:03:24 eabd8714fec9 Downloading [===============> ] 112.6MB/375MB 17:03:24 eabd8714fec9 Downloading [===============> ] 112.6MB/375MB 17:03:24 44986281b8b9 Pull complete 17:03:24 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 17:03:24 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 17:03:24 a721db3e3f3d Extracting [============================================> ] 4.915MB/5.526MB 17:03:24 a721db3e3f3d Extracting [==================================================>] 5.526MB/5.526MB 17:03:25 eca0188f477e Extracting [================================================> ] 36.18MB/37.17MB 17:03:25 eca0188f477e Extracting [================================================> ] 36.18MB/37.17MB 17:03:25 f836d47fdc4d Extracting [=======================> ] 51.25MB/107.3MB 17:03:25 eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB 17:03:25 eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB 17:03:25 eabd8714fec9 Downloading [===============> ] 115.8MB/375MB 17:03:25 eabd8714fec9 Downloading [===============> ] 115.8MB/375MB 17:03:25 da3ed5db7103 Downloading [=> ] 3.713MB/127.4MB 17:03:25 c81b87c3efcc Downloading [====> ] 10.21MB/127.4MB 17:03:25 a721db3e3f3d Pull complete 17:03:25 1850a929b84a Extracting [==================================================>] 149B/149B 17:03:25 1850a929b84a Extracting [==================================================>] 149B/149B 17:03:25 f836d47fdc4d Extracting [=========================> ] 54.03MB/107.3MB 17:03:25 c81b87c3efcc Downloading [=====> ] 13.42MB/127.4MB 17:03:25 eabd8714fec9 Downloading [================> ] 122.2MB/375MB 17:03:25 eabd8714fec9 Downloading [================> ] 122.2MB/375MB 17:03:25 da3ed5db7103 Downloading [===> ] 9.067MB/127.4MB 17:03:25 bf70c5107ab5 Pull complete 17:03:25 f836d47fdc4d Extracting [===========================> ] 57.93MB/107.3MB 17:03:25 c81b87c3efcc Downloading [======> ] 16.08MB/127.4MB 17:03:25 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 17:03:25 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 17:03:25 da3ed5db7103 Downloading [=====> ] 13.87MB/127.4MB 17:03:25 eca0188f477e Pull complete 17:03:25 eca0188f477e Pull complete 17:03:25 e444bcd4d577 Extracting [==================================================>] 279B/279B 17:03:25 e444bcd4d577 Extracting [==================================================>] 279B/279B 17:03:25 e444bcd4d577 Extracting [==================================================>] 279B/279B 17:03:25 e444bcd4d577 Extracting [==================================================>] 279B/279B 17:03:25 1850a929b84a Pull complete 17:03:25 
eabd8714fec9 Downloading [================> ] 127MB/375MB 17:03:25 eabd8714fec9 Downloading [================> ] 127MB/375MB 17:03:25 397a918c7da3 Extracting [==================================================>] 327B/327B 17:03:25 397a918c7da3 Extracting [==================================================>] 327B/327B 17:03:25 f836d47fdc4d Extracting [=============================> ] 62.39MB/107.3MB 17:03:25 c81b87c3efcc Downloading [=======> ] 19.29MB/127.4MB 17:03:25 da3ed5db7103 Downloading [=======> ] 19.75MB/127.4MB 17:03:25 eabd8714fec9 Downloading [=================> ] 132.4MB/375MB 17:03:25 eabd8714fec9 Downloading [=================> ] 132.4MB/375MB 17:03:25 e444bcd4d577 Pull complete 17:03:25 e444bcd4d577 Pull complete 17:03:25 397a918c7da3 Pull complete 17:03:25 1ccde423731d Pull complete 17:03:25 7221d93db8a9 Extracting [==================================================>] 100B/100B 17:03:25 7221d93db8a9 Extracting [==================================================>] 100B/100B 17:03:25 c81b87c3efcc Downloading [==========> ] 25.67MB/127.4MB 17:03:25 eabd8714fec9 Downloading [==================> ] 139.3MB/375MB 17:03:25 eabd8714fec9 Downloading [==================> ] 139.3MB/375MB 17:03:25 f836d47fdc4d Extracting [==============================> ] 65.73MB/107.3MB 17:03:25 da3ed5db7103 Downloading [========> ] 21.36MB/127.4MB 17:03:25 806be17e856d Extracting [> ] 557.1kB/89.72MB 17:03:25 c81b87c3efcc Downloading [===========> ] 30.49MB/127.4MB 17:03:25 eabd8714fec9 Downloading [===================> ] 142.5MB/375MB 17:03:25 eabd8714fec9 Downloading [===================> ] 142.5MB/375MB 17:03:25 f836d47fdc4d Extracting [===============================> ] 68.52MB/107.3MB 17:03:25 da3ed5db7103 Downloading [===========> ] 28.87MB/127.4MB 17:03:25 7221d93db8a9 Pull complete 17:03:25 7df673c7455d Extracting [==================================================>] 694B/694B 17:03:25 7df673c7455d Extracting [==================================================>] 694B/694B 17:03:25 806be17e856d Extracting [=> ] 3.342MB/89.72MB 17:03:25 f836d47fdc4d Extracting [=================================> ] 71.86MB/107.3MB 17:03:25 eabd8714fec9 Downloading [===================> ] 147.9MB/375MB 17:03:25 eabd8714fec9 Downloading [===================> ] 147.9MB/375MB 17:03:25 da3ed5db7103 Downloading [=============> ] 34.22MB/127.4MB 17:03:25 806be17e856d Extracting [===> ] 6.128MB/89.72MB 17:03:25 7df673c7455d Pull complete 17:03:25 prometheus Pulled 17:03:25 f836d47fdc4d Extracting [==================================> ] 74.65MB/107.3MB 17:03:25 eabd8714fec9 Downloading [====================> ] 151.6MB/375MB 17:03:25 eabd8714fec9 Downloading [====================> ] 151.6MB/375MB 17:03:25 da3ed5db7103 Downloading [===============> ] 40.66MB/127.4MB 17:03:25 806be17e856d Extracting [=====> ] 9.47MB/89.72MB 17:03:25 f836d47fdc4d Extracting [====================================> ] 78.54MB/107.3MB 17:03:25 da3ed5db7103 Downloading [==================> ] 47.08MB/127.4MB 17:03:25 eabd8714fec9 Downloading [====================> ] 154.8MB/375MB 17:03:25 eabd8714fec9 Downloading [====================> ] 154.8MB/375MB 17:03:25 c81b87c3efcc Downloading [============> ] 32.64MB/127.4MB 17:03:25 806be17e856d Extracting [======> ] 12.26MB/89.72MB 17:03:25 f836d47fdc4d Extracting [=====================================> ] 81.33MB/107.3MB 17:03:26 eabd8714fec9 Downloading [=====================> ] 158.1MB/375MB 17:03:26 eabd8714fec9 Downloading [=====================> ] 158.1MB/375MB 17:03:26 da3ed5db7103 Downloading 
[====================> ] 52.46MB/127.4MB 17:03:26 806be17e856d Extracting [========> ] 15.6MB/89.72MB 17:03:26 c81b87c3efcc Downloading [==============> ] 35.85MB/127.4MB 17:03:26 f836d47fdc4d Extracting [=======================================> ] 84.12MB/107.3MB 17:03:26 da3ed5db7103 Downloading [======================> ] 57.83MB/127.4MB 17:03:26 eabd8714fec9 Downloading [======================> ] 165MB/375MB 17:03:26 eabd8714fec9 Downloading [======================> ] 165MB/375MB 17:03:26 c81b87c3efcc Downloading [===============> ] 39.61MB/127.4MB 17:03:26 806be17e856d Extracting [==========> ] 19.5MB/89.72MB 17:03:26 f836d47fdc4d Extracting [=========================================> ] 89.13MB/107.3MB 17:03:26 da3ed5db7103 Downloading [=======================> ] 60.52MB/127.4MB 17:03:26 eabd8714fec9 Downloading [=======================> ] 172.6MB/375MB 17:03:26 eabd8714fec9 Downloading [=======================> ] 172.6MB/375MB 17:03:26 c81b87c3efcc Downloading [=================> ] 43.9MB/127.4MB 17:03:26 806be17e856d Extracting [============> ] 22.28MB/89.72MB 17:03:26 f836d47fdc4d Extracting [=============================================> ] 98.04MB/107.3MB 17:03:26 eabd8714fec9 Downloading [========================> ] 180.6MB/375MB 17:03:26 eabd8714fec9 Downloading [========================> ] 180.6MB/375MB 17:03:26 da3ed5db7103 Downloading [========================> ] 62.65MB/127.4MB 17:03:26 c81b87c3efcc Downloading [===================> ] 49.25MB/127.4MB 17:03:26 806be17e856d Extracting [=============> ] 25.07MB/89.72MB 17:03:26 f836d47fdc4d Extracting [===============================================> ] 102.5MB/107.3MB 17:03:26 eabd8714fec9 Downloading [========================> ] 185.9MB/375MB 17:03:26 eabd8714fec9 Downloading [========================> ] 185.9MB/375MB 17:03:26 da3ed5db7103 Downloading [=========================> ] 65.86MB/127.4MB 17:03:26 c81b87c3efcc Downloading [=====================> ] 54.6MB/127.4MB 17:03:26 806be17e856d Extracting [===============> ] 27.85MB/89.72MB 17:03:26 eabd8714fec9 Downloading [=========================> ] 192.4MB/375MB 17:03:26 eabd8714fec9 Downloading [=========================> ] 192.4MB/375MB 17:03:26 da3ed5db7103 Downloading [===========================> ] 69.61MB/127.4MB 17:03:26 f836d47fdc4d Extracting [================================================> ] 104.2MB/107.3MB 17:03:26 c81b87c3efcc Downloading [=======================> ] 59.4MB/127.4MB 17:03:26 806be17e856d Extracting [=================> ] 30.64MB/89.72MB 17:03:26 eabd8714fec9 Downloading [==========================> ] 197.2MB/375MB 17:03:26 eabd8714fec9 Downloading [==========================> ] 197.2MB/375MB 17:03:26 da3ed5db7103 Downloading [=============================> ] 74.43MB/127.4MB 17:03:26 c81b87c3efcc Downloading [=========================> ] 64.22MB/127.4MB 17:03:26 f836d47fdc4d Extracting [=================================================> ] 105.3MB/107.3MB 17:03:26 806be17e856d Extracting [==================> ] 33.42MB/89.72MB 17:03:26 eabd8714fec9 Downloading [===========================> ] 203.1MB/375MB 17:03:26 eabd8714fec9 Downloading [===========================> ] 203.1MB/375MB 17:03:26 da3ed5db7103 Downloading [===============================> ] 79.78MB/127.4MB 17:03:26 f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB 17:03:26 c81b87c3efcc Downloading [===========================> ] 70.1MB/127.4MB 17:03:26 eabd8714fec9 Downloading [===========================> ] 206.3MB/375MB 17:03:26 eabd8714fec9 
Downloading [===========================> ] 206.3MB/375MB 17:03:26 806be17e856d Extracting [====================> ] 36.21MB/89.72MB 17:03:26 da3ed5db7103 Downloading [=================================> ] 85.15MB/127.4MB 17:03:26 f836d47fdc4d Pull complete 17:03:26 c81b87c3efcc Downloading [============================> ] 73.85MB/127.4MB 17:03:26 eabd8714fec9 Downloading [============================> ] 211.1MB/375MB 17:03:26 eabd8714fec9 Downloading [============================> ] 211.1MB/375MB 17:03:26 806be17e856d Extracting [======================> ] 40.11MB/89.72MB 17:03:26 da3ed5db7103 Downloading [==================================> ] 87.84MB/127.4MB 17:03:27 c81b87c3efcc Downloading [==============================> ] 76.54MB/127.4MB 17:03:27 8b5292c940e1 Extracting [> ] 557.1kB/63.48MB 17:03:27 eabd8714fec9 Downloading [=============================> ] 217.5MB/375MB 17:03:27 eabd8714fec9 Downloading [=============================> ] 217.5MB/375MB 17:03:27 806be17e856d Extracting [=======================> ] 42.89MB/89.72MB 17:03:27 da3ed5db7103 Downloading [====================================> ] 92.65MB/127.4MB 17:03:27 c81b87c3efcc Downloading [==============================> ] 78.67MB/127.4MB 17:03:27 eabd8714fec9 Downloading [=============================> ] 224.5MB/375MB 17:03:27 eabd8714fec9 Downloading [=============================> ] 224.5MB/375MB 17:03:27 806be17e856d Extracting [=========================> ] 45.68MB/89.72MB 17:03:27 da3ed5db7103 Downloading [=====================================> ] 96.41MB/127.4MB 17:03:27 8b5292c940e1 Extracting [=> ] 1.671MB/63.48MB 17:03:27 eabd8714fec9 Downloading [==============================> ] 231.5MB/375MB 17:03:27 eabd8714fec9 Downloading [==============================> ] 231.5MB/375MB 17:03:27 806be17e856d Extracting [===========================> ] 49.02MB/89.72MB 17:03:27 c81b87c3efcc Downloading [===============================> ] 80.81MB/127.4MB 17:03:27 8b5292c940e1 Extracting [=> ] 2.228MB/63.48MB 17:03:27 da3ed5db7103 Downloading [======================================> ] 98.56MB/127.4MB 17:03:27 eabd8714fec9 Downloading [===============================> ] 238.4MB/375MB 17:03:27 eabd8714fec9 Downloading [===============================> ] 238.4MB/375MB 17:03:27 806be17e856d Extracting [=============================> ] 53.48MB/89.72MB 17:03:27 c81b87c3efcc Downloading [================================> ] 83.47MB/127.4MB 17:03:27 da3ed5db7103 Downloading [========================================> ] 102.9MB/127.4MB 17:03:27 8b5292c940e1 Extracting [==> ] 2.785MB/63.48MB 17:03:27 eabd8714fec9 Downloading [================================> ] 245.9MB/375MB 17:03:27 eabd8714fec9 Downloading [================================> ] 245.9MB/375MB 17:03:27 806be17e856d Extracting [===============================> ] 57.38MB/89.72MB 17:03:27 c81b87c3efcc Downloading [=================================> ] 86.12MB/127.4MB 17:03:27 da3ed5db7103 Downloading [==========================================> ] 109.3MB/127.4MB 17:03:27 eabd8714fec9 Downloading [=================================> ] 251.3MB/375MB 17:03:27 eabd8714fec9 Downloading [=================================> ] 251.3MB/375MB 17:03:27 806be17e856d Extracting [=================================> ] 60.72MB/89.72MB 17:03:27 c81b87c3efcc Downloading [==================================> ] 88.25MB/127.4MB 17:03:27 da3ed5db7103 Downloading [=============================================> ] 116.3MB/127.4MB 17:03:27 8b5292c940e1 Extracting [===> ] 4.456MB/63.48MB 17:03:27 eabd8714fec9 
Downloading [==================================> ] 256.6MB/375MB 17:03:27 eabd8714fec9 Downloading [==================================> ] 256.6MB/375MB 17:03:27 806be17e856d Extracting [====================================> ] 65.18MB/89.72MB 17:03:27 da3ed5db7103 Downloading [================================================> ] 124.9MB/127.4MB 17:03:27 c81b87c3efcc Downloading [===================================> ] 90.91MB/127.4MB 17:03:27 eabd8714fec9 Downloading [==================================> ] 258.8MB/375MB 17:03:27 eabd8714fec9 Downloading [==================================> ] 258.8MB/375MB 17:03:27 da3ed5db7103 Verifying Checksum 17:03:27 da3ed5db7103 Download complete 17:03:27 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB 17:03:27 806be17e856d Extracting [=====================================> ] 67.96MB/89.72MB 17:03:27 c81b87c3efcc Downloading [=====================================> ] 95.18MB/127.4MB 17:03:27 eabd8714fec9 Downloading [===================================> ] 266.3MB/375MB 17:03:27 eabd8714fec9 Downloading [===================================> ] 266.3MB/375MB 17:03:27 8b5292c940e1 Extracting [=====> ] 7.242MB/63.48MB 17:03:27 806be17e856d Extracting [======================================> ] 69.07MB/89.72MB 17:03:27 c955f6e31a04 Downloading [=============> ] 934B/3.446kB 17:03:27 c955f6e31a04 Downloading [==================================================>] 3.446kB/3.446kB 17:03:27 c955f6e31a04 Verifying Checksum 17:03:27 c955f6e31a04 Download complete 17:03:28 c81b87c3efcc Downloading [======================================> ] 97.85MB/127.4MB 17:03:28 eabd8714fec9 Downloading [====================================> ] 272.7MB/375MB 17:03:28 eabd8714fec9 Downloading [====================================> ] 272.7MB/375MB 17:03:28 806be17e856d Extracting [======================================> ] 69.63MB/89.72MB 17:03:28 eabd8714fec9 Downloading [=====================================> ] 279.7MB/375MB 17:03:28 eabd8714fec9 Downloading [=====================================> ] 279.7MB/375MB 17:03:28 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB 17:03:28 c81b87c3efcc Downloading [=======================================> ] 99.47MB/127.4MB 17:03:28 806be17e856d Extracting [========================================> ] 71.86MB/89.72MB 17:03:28 eabd8714fec9 Downloading [======================================> ] 286.1MB/375MB 17:03:28 eabd8714fec9 Downloading [======================================> ] 286.1MB/375MB 17:03:28 8b5292c940e1 Extracting [=======> ] 9.47MB/63.48MB 17:03:28 c81b87c3efcc Downloading [========================================> ] 102.7MB/127.4MB 17:03:28 eabd8714fec9 Downloading [=======================================> ] 294.2MB/375MB 17:03:28 eabd8714fec9 Downloading [=======================================> ] 294.2MB/375MB 17:03:28 806be17e856d Extracting [========================================> ] 73.53MB/89.72MB 17:03:28 8b5292c940e1 Extracting [=======> ] 10.03MB/63.48MB 17:03:28 c81b87c3efcc Downloading [=========================================> ] 107MB/127.4MB 17:03:28 eabd8714fec9 Downloading [=======================================> ] 296.8MB/375MB 17:03:28 eabd8714fec9 Downloading [=======================================> ] 296.8MB/375MB 17:03:28 806be17e856d Extracting [==========================================> ] 76.32MB/89.72MB 17:03:28 c81b87c3efcc Downloading [============================================> ] 113.9MB/127.4MB 17:03:28 8b5292c940e1 Extracting [=========> ] 12.26MB/63.48MB 17:03:28 eabd8714fec9 Downloading 
[========================================> ] 303.2MB/375MB 17:03:28 eabd8714fec9 Downloading [========================================> ] 303.2MB/375MB 17:03:28 806be17e856d Extracting [=============================================> ] 80.77MB/89.72MB 17:03:28 c81b87c3efcc Downloading [===============================================> ] 121.4MB/127.4MB 17:03:28 eabd8714fec9 Downloading [=========================================> ] 309.7MB/375MB 17:03:28 eabd8714fec9 Downloading [=========================================> ] 309.7MB/375MB 17:03:28 8b5292c940e1 Extracting [===========> ] 15.04MB/63.48MB 17:03:28 806be17e856d Extracting [==============================================> ] 83.56MB/89.72MB 17:03:28 c81b87c3efcc Downloading [================================================> ] 124.6MB/127.4MB 17:03:28 eabd8714fec9 Downloading [==========================================> ] 317.2MB/375MB 17:03:28 eabd8714fec9 Downloading [==========================================> ] 317.2MB/375MB 17:03:28 c81b87c3efcc Verifying Checksum 17:03:28 c81b87c3efcc Download complete 17:03:28 8b5292c940e1 Extracting [=============> ] 16.71MB/63.48MB 17:03:28 806be17e856d Extracting [===============================================> ] 85.79MB/89.72MB 17:03:28 eabd8714fec9 Downloading [===========================================> ] 324.7MB/375MB 17:03:28 eabd8714fec9 Downloading [===========================================> ] 324.7MB/375MB 17:03:28 8b5292c940e1 Extracting [==============> ] 18.38MB/63.48MB 17:03:28 806be17e856d Extracting [================================================> ] 87.46MB/89.72MB 17:03:28 eabd8714fec9 Downloading [============================================> ] 331.6MB/375MB 17:03:28 eabd8714fec9 Downloading [============================================> ] 331.6MB/375MB 17:03:29 8b5292c940e1 Extracting [================> ] 21.17MB/63.48MB 17:03:29 eabd8714fec9 Downloading [=============================================> ] 340.7MB/375MB 17:03:29 eabd8714fec9 Downloading [=============================================> ] 340.7MB/375MB 17:03:29 806be17e856d Extracting [=================================================> ] 89.13MB/89.72MB 17:03:29 8b5292c940e1 Extracting [=================> ] 22.84MB/63.48MB 17:03:29 806be17e856d Extracting [==================================================>] 89.72MB/89.72MB 17:03:29 eabd8714fec9 Downloading [==============================================> ] 346.6MB/375MB 17:03:29 eabd8714fec9 Downloading [==============================================> ] 346.6MB/375MB 17:03:29 8b5292c940e1 Extracting [===================> ] 24.51MB/63.48MB 17:03:29 eabd8714fec9 Downloading [==============================================> ] 349.9MB/375MB 17:03:29 8b5292c940e1 Extracting [=====================> ] 27.3MB/63.48MB 17:03:29 eabd8714fec9 Downloading [==============================================> ] 349.9MB/375MB 17:03:29 806be17e856d Pull complete 17:03:29 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 17:03:29 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 17:03:29 eabd8714fec9 Downloading [===============================================> ] 356.3MB/375MB 17:03:29 eabd8714fec9 Downloading [===============================================> ] 356.3MB/375MB 17:03:29 8b5292c940e1 Extracting [========================> ] 30.64MB/63.48MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 360MB/375MB 17:03:29 eabd8714fec9 Downloading 
[================================================> ] 360MB/375MB 17:03:29 8b5292c940e1 Extracting [=========================> ] 32.87MB/63.48MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 363.2MB/375MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 363.2MB/375MB 17:03:29 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 364.3MB/375MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 364.3MB/375MB 17:03:29 8b5292c940e1 Extracting [===============================> ] 39.55MB/63.48MB 17:03:29 634de6c90876 Pull complete 17:03:29 cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB 17:03:29 cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB 17:03:29 eabd8714fec9 Downloading [================================================> ] 367.5MB/375MB 17:03:29 eabd8714fec9 Downloading [================================================> ] 367.5MB/375MB 17:03:29 8b5292c940e1 Extracting [=================================> ] 42.89MB/63.48MB 17:03:29 eabd8714fec9 Downloading [=================================================> ] 373.4MB/375MB 17:03:29 eabd8714fec9 Downloading [=================================================> ] 373.4MB/375MB 17:03:30 8b5292c940e1 Extracting [====================================> ] 46.24MB/63.48MB 17:03:30 8b5292c940e1 Extracting [======================================> ] 49.02MB/63.48MB 17:03:30 cd00854cfb1a Pull complete 17:03:30 mariadb Pulled 17:03:30 8b5292c940e1 Extracting [========================================> ] 51.81MB/63.48MB 17:03:30 eabd8714fec9 Downloading [=================================================> ] 373.9MB/375MB 17:03:30 eabd8714fec9 Downloading [=================================================> ] 373.9MB/375MB 17:03:30 eabd8714fec9 Verifying Checksum 17:03:30 eabd8714fec9 Download complete 17:03:30 eabd8714fec9 Verifying Checksum 17:03:30 eabd8714fec9 Download complete 17:03:30 8b5292c940e1 Extracting [=============================================> ] 57.93MB/63.48MB 17:03:30 eabd8714fec9 Extracting [> ] 557.1kB/375MB 17:03:30 eabd8714fec9 Extracting [> ] 557.1kB/375MB 17:03:30 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB 17:03:30 eabd8714fec9 Extracting [=> ] 13.37MB/375MB 17:03:30 eabd8714fec9 Extracting [=> ] 13.37MB/375MB 17:03:30 8b5292c940e1 Extracting [=================================================> ] 62.95MB/63.48MB 17:03:30 eabd8714fec9 Extracting [==> ] 20.61MB/375MB 17:03:30 eabd8714fec9 Extracting [==> ] 20.61MB/375MB 17:03:30 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 17:03:30 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 17:03:31 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 17:03:31 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 17:03:31 eabd8714fec9 Extracting [====> ] 37.32MB/375MB 17:03:31 eabd8714fec9 Extracting [====> ] 37.32MB/375MB 17:03:31 eabd8714fec9 Extracting [======> ] 51.81MB/375MB 17:03:31 eabd8714fec9 Extracting [======> ] 51.81MB/375MB 17:03:31 8b5292c940e1 Pull complete 17:03:31 eabd8714fec9 Extracting [========> ] 60.72MB/375MB 17:03:31 eabd8714fec9 Extracting [========> ] 60.72MB/375MB 17:03:31 454a4350d439 Extracting 
[==================================================>] 11.93kB/11.93kB 17:03:31 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 17:03:31 eabd8714fec9 Extracting [=========> ] 72.97MB/375MB 17:03:31 eabd8714fec9 Extracting [=========> ] 72.97MB/375MB 17:03:31 eabd8714fec9 Extracting [===========> ] 88.01MB/375MB 17:03:31 eabd8714fec9 Extracting [===========> ] 88.01MB/375MB 17:03:31 eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 17:03:31 eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 17:03:31 eabd8714fec9 Extracting [==============> ] 107MB/375MB 17:03:31 eabd8714fec9 Extracting [==============> ] 107MB/375MB 17:03:32 eabd8714fec9 Extracting [===============> ] 112.5MB/375MB 17:03:32 eabd8714fec9 Extracting [===============> ] 112.5MB/375MB 17:03:32 eabd8714fec9 Extracting [===============> ] 118.7MB/375MB 17:03:32 eabd8714fec9 Extracting [===============> ] 118.7MB/375MB 17:03:32 eabd8714fec9 Extracting [================> ] 124.2MB/375MB 17:03:32 eabd8714fec9 Extracting [================> ] 124.2MB/375MB 17:03:32 eabd8714fec9 Extracting [=================> ] 130.4MB/375MB 17:03:32 eabd8714fec9 Extracting [=================> ] 130.4MB/375MB 17:03:32 454a4350d439 Pull complete 17:03:32 eabd8714fec9 Extracting [==================> ] 136.5MB/375MB 17:03:32 eabd8714fec9 Extracting [==================> ] 136.5MB/375MB 17:03:32 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 17:03:32 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 17:03:32 eabd8714fec9 Extracting [===================> ] 143.2MB/375MB 17:03:32 eabd8714fec9 Extracting [===================> ] 143.2MB/375MB 17:03:32 eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 17:03:32 eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 17:03:32 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 17:03:32 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 17:03:33 eabd8714fec9 Extracting [=====================> ] 157.6MB/375MB 17:03:33 eabd8714fec9 Extracting [=====================> ] 157.6MB/375MB 17:03:33 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 17:03:33 eabd8714fec9 Extracting [=====================> ] 164.3MB/375MB 17:03:33 eabd8714fec9 Extracting [=====================> ] 164.3MB/375MB 17:03:33 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 17:03:33 eabd8714fec9 Extracting [=======================> ] 172.7MB/375MB 17:03:33 eabd8714fec9 Extracting [=======================> ] 172.7MB/375MB 17:03:33 eabd8714fec9 Extracting [=========================> ] 187.7MB/375MB 17:03:33 eabd8714fec9 Extracting [=========================> ] 187.7MB/375MB 17:03:33 eabd8714fec9 Extracting [===========================> ] 203.3MB/375MB 17:03:33 eabd8714fec9 Extracting [===========================> ] 203.3MB/375MB 17:03:33 eabd8714fec9 Extracting [============================> ] 213.9MB/375MB 17:03:33 eabd8714fec9 Extracting [============================> ] 213.9MB/375MB 17:03:33 eabd8714fec9 Extracting [=============================> ] 218.9MB/375MB 17:03:33 eabd8714fec9 Extracting [=============================> ] 218.9MB/375MB 17:03:33 eabd8714fec9 Extracting [=============================> ] 223.9MB/375MB 17:03:33 eabd8714fec9 Extracting [=============================> ] 223.9MB/375MB 17:03:33 eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB 17:03:33 eabd8714fec9 Extracting [==============================> 
] 228.4MB/375MB 17:03:34 eabd8714fec9 Extracting [===============================> ] 234MB/375MB 17:03:34 eabd8714fec9 Extracting [===============================> ] 234MB/375MB 17:03:34 eabd8714fec9 Extracting [===============================> ] 236.2MB/375MB 17:03:34 eabd8714fec9 Extracting [===============================> ] 236.2MB/375MB 17:03:34 eabd8714fec9 Extracting [================================> ] 241.8MB/375MB 17:03:34 eabd8714fec9 Extracting [================================> ] 241.8MB/375MB 17:03:34 eabd8714fec9 Extracting [================================> ] 246.8MB/375MB 17:03:34 eabd8714fec9 Extracting [================================> ] 246.8MB/375MB 17:03:34 eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 17:03:34 eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 17:03:34 eabd8714fec9 Extracting [==================================> ] 257.9MB/375MB 17:03:34 eabd8714fec9 Extracting [==================================> ] 257.9MB/375MB 17:03:34 eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 17:03:34 eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 17:03:34 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 17:03:34 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 17:03:34 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 17:03:34 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 17:03:35 eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 17:03:35 eabd8714fec9 Extracting [=====================================> ] 284.1MB/375MB 17:03:35 eabd8714fec9 Extracting [=====================================> ] 284.1MB/375MB 17:03:35 eabd8714fec9 Extracting [======================================> ] 291.3MB/375MB 17:03:35 eabd8714fec9 Extracting [======================================> ] 291.3MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 17:03:36 9a8c18aee5ea Pull complete 17:03:36 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 17:03:36 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 17:03:36 eabd8714fec9 Extracting [========================================> ] 300.8MB/375MB 17:03:36 eabd8714fec9 Extracting [========================================> ] 300.8MB/375MB 17:03:36 eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB 17:03:36 eabd8714fec9 Extracting [========================================> ] 
303.6MB/375MB
17:03:40 eabd8714fec9 Extracting [==================================================>] 375MB/375MB
17:03:42 grafana Pulled
17:03:45 eabd8714fec9 Pull complete
17:03:45 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB
17:03:45 45fd2fec8a19 Pull complete
17:03:45 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB
17:03:45 8f10199ed94b Pull complete
17:03:45 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB
17:03:46 f963a77d2726 Pull complete
17:03:46 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB
17:03:46 f3a82e9f1761 Pull complete
17:03:46 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB
17:03:46 79161a3f5362 Pull complete
17:03:46 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB
17:03:46 9c266ba63f51 Pull complete
17:03:46 2e8a7df9c2ee Extracting [==================================================>] 851B/851B
17:03:46 2e8a7df9c2ee Pull complete
17:03:46 10f05dd8b1db Extracting [==================================================>] 98B/98B
17:03:46 10f05dd8b1db Pull complete
17:03:46 41dac8b43ba6 Extracting [==================================================>] 171B/171B
17:03:47 41dac8b43ba6 Pull complete
17:03:47 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB
17:03:47 71a9f6a9ab4d Pull complete
17:03:48 c81b87c3efcc Extracting [==================================================>] 127.4MB/127.4MB
17:03:48 c81b87c3efcc Pull complete
17:03:48 5ee96432c7eb Extracting [==================================================>] 3.628kB/3.628kB
17:03:48 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB
17:03:48 da3ed5db7103 Pull complete
17:03:48 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB
17:03:48 5ee96432c7eb Pull complete
17:03:48 kafka Pulled
17:03:48 c955f6e31a04 Pull complete
17:03:48 zookeeper Pulled
17:03:48 Network compose_default Creating
17:03:48 Network compose_default Created
17:03:48 Container prometheus Creating
17:03:48 Container zookeeper Creating
17:03:48 Container mariadb Creating
17:03:48 Container simulator Creating
17:03:57 Container simulator Created
17:03:57 Container mariadb Created
17:03:57 Container policy-db-migrator Creating
17:03:57 Container zookeeper Created
17:03:57 Container kafka Creating
17:03:57 Container prometheus Created
17:03:57 Container grafana Creating
17:03:57 Container policy-db-migrator Created
17:03:57 Container policy-api Creating
17:03:57 Container kafka Created
17:03:57 Container grafana Created
17:03:57 Container policy-api Created
17:03:57 Container policy-pap Creating
17:03:57 Container policy-pap Created
17:03:57 Container policy-apex-pdp Creating
17:03:57 Container policy-apex-pdp Created
17:03:57 Container simulator Starting
17:03:57 Container mariadb Starting
17:03:57 Container prometheus Starting
17:03:57 Container zookeeper Starting
17:03:58 Container prometheus Started
17:03:58 Container grafana Starting
17:03:59 Container zookeeper Started
17:03:59 Container kafka Starting
17:04:00 Container kafka Started
17:04:00 Container grafana Started
17:04:01 Container mariadb Started
17:04:01 Container policy-db-migrator Starting
17:04:02 Container policy-db-migrator Started
17:04:02 Container policy-api Starting
17:04:03 Container policy-api Started
17:04:03 Container policy-pap Starting
17:04:03 Container simulator Started
17:04:05 Container policy-pap Started
17:04:05 Container policy-apex-pdp Starting
17:04:06 Container policy-apex-pdp Started
17:04:06 Prometheus server: http://localhost:30259
17:04:06 Grafana server: http://localhost:30269
17:04:16 Waiting for REST to come up on localhost port 30003...
17:04:16 NAMES STATUS
17:04:16 policy-apex-pdp Up 10 seconds
17:04:16 policy-pap Up 11 seconds
17:04:16 policy-api Up 13 seconds
17:04:16 grafana Up 15 seconds
17:04:16 kafka Up 16 seconds
17:04:16 zookeeper Up 16 seconds
17:04:16 simulator Up 12 seconds
17:04:16 mariadb Up 14 seconds
17:04:16 prometheus Up 17 seconds
17:04:41 NAMES STATUS
17:04:41 policy-apex-pdp Up 35 seconds
17:04:41 policy-pap Up 36 seconds
17:04:41 policy-api Up 38 seconds
17:04:41 grafana Up 41 seconds
17:04:41 kafka Up 41 seconds
17:04:41 zookeeper Up 42 seconds
17:04:41 simulator Up 37 seconds
17:04:41 mariadb Up 39 seconds
17:04:41 prometheus Up 43 seconds
17:04:41 Build docker image for robot framework
17:04:41 Error: No such image: policy-csit-robot
17:04:41 Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
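The robot-framework image build that follows (Step 1/9 through Step 9/9) corresponds to a Dockerfile roughly like the sketch below. It is reconstructed from the step output in the log, not taken from the repository, so the line continuations and exact formatting are assumptions:

    FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
    ARG CSIT_SCRIPT=${CSIT_SCRIPT}
    ARG ROBOT_FILE=${ROBOT_FILE}
    ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
    RUN python3 -m pip -qq install --upgrade pip && \
        python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && \
        python3 -m pip -qq install --upgrade confluent-kafka && \
        python3 -m pip freeze
    RUN mkdir -p ${ROBOT_WORKSPACE}
    COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
    WORKDIR ${ROBOT_WORKSPACE}
    CMD ["sh", "-c", "./run-test.sh" ]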
17:04:42 Build robot framework docker image 17:04:43 Sending build context to Docker daemon 16MB 17:04:43 Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye 17:04:43 3.10-slim-bullseye: Pulling from library/python 17:04:43 e1f16b66c2e8: Pulling fs layer 17:04:43 023041bc400d: Pulling fs layer 17:04:43 e2b6e24646ef: Pulling fs layer 17:04:43 2e8c448fc85b: Pulling fs layer 17:04:43 2e8c448fc85b: Waiting 17:04:43 023041bc400d: Verifying Checksum 17:04:43 023041bc400d: Download complete 17:04:43 2e8c448fc85b: Verifying Checksum 17:04:43 2e8c448fc85b: Download complete 17:04:43 e2b6e24646ef: Verifying Checksum 17:04:43 e2b6e24646ef: Download complete 17:04:43 e1f16b66c2e8: Verifying Checksum 17:04:43 e1f16b66c2e8: Download complete 17:04:44 e1f16b66c2e8: Pull complete 17:04:44 023041bc400d: Pull complete 17:04:45 e2b6e24646ef: Pull complete 17:04:45 2e8c448fc85b: Pull complete 17:04:45 Digest: sha256:dd4c0e03b5887369da59ac8f97f2697baf7c33c5c7659d274297e9514d40b68c 17:04:45 Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye 17:04:45 ---> db29290af7bb 17:04:45 Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT} 17:04:46 ---> Running in 353a3ddf0f4c 17:04:46 Removing intermediate container 353a3ddf0f4c 17:04:47 ---> 3bf02ae7043a 17:04:47 Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE} 17:04:47 ---> Running in ef51d28553ba 17:04:47 Removing intermediate container ef51d28553ba 17:04:47 ---> f8df29ec8249 17:04:47 Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST 17:04:47 ---> Running in 58c579caaf3a 17:04:47 Removing intermediate container 58c579caaf3a 17:04:47 ---> a4c579393d2a 17:04:47 Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze 17:04:47 ---> Running in 7bdb32ec1c7d 17:04:59 bcrypt==4.3.0 17:04:59 certifi==2025.4.26 17:04:59 cffi==1.17.1 17:04:59 charset-normalizer==3.4.2 17:04:59 confluent-kafka==2.10.0 17:04:59 cryptography==45.0.4 17:04:59 decorator==5.2.1 17:04:59 deepdiff==8.5.0 17:04:59 dnspython==2.7.0 17:04:59 future==1.0.0 17:04:59 idna==3.10 17:04:59 Jinja2==3.1.6 17:04:59 jsonpath-rw==1.4.0 17:04:59 kafka-python==2.2.11 17:04:59 MarkupSafe==3.0.2 17:04:59 more-itertools==5.0.0 17:04:59 orderly-set==5.4.1 17:04:59 paramiko==3.5.1 17:04:59 pbr==6.1.1 17:04:59 ply==3.11 17:04:59 protobuf==6.31.1 17:04:59 pycparser==2.22 17:04:59 PyNaCl==1.5.0 17:04:59 PyYAML==6.0.2 17:04:59 requests==2.32.4 17:04:59 robotframework==7.3 17:04:59 robotframework-onap==0.6.0.dev105 17:04:59 robotframework-requests==1.0a14 17:04:59 robotlibcore-temp==1.0.2 17:04:59 six==1.17.0 17:04:59 urllib3==2.4.0 17:05:02 Removing intermediate container 7bdb32ec1c7d 17:05:02 ---> f14a2b50765d 17:05:02 Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE} 17:05:02 ---> Running in 883a882ea894 17:05:03 Removing intermediate container 883a882ea894 17:05:03 ---> bc87a6e14955 17:05:03 Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/ 17:05:04 ---> 27a68b900e30 17:05:04 Step 8/9 : WORKDIR ${ROBOT_WORKSPACE} 17:05:04 ---> Running in ae83a5ee3a6a 17:05:04 Removing intermediate container ae83a5ee3a6a 17:05:04 ---> 274cb636aeca 17:05:04 Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ] 17:05:04 ---> Running in 466dea8db9be 17:05:04 Removing intermediate container 466dea8db9be 17:05:04 ---> 
60d870019995
17:05:04 Successfully built 60d870019995
17:05:04 Successfully tagged policy-csit-robot:latest
17:05:07 top - 17:05:07 up 4 min, 0 users, load average: 2.81, 1.62, 0.66
17:05:07 Tasks: 206 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
17:05:07 %Cpu(s): 14.8 us, 4.0 sy, 0.0 ni, 75.4 id, 5.6 wa, 0.0 hi, 0.1 si, 0.1 st
17:05:07
17:05:07 total used free shared buff/cache available
17:05:07 Mem: 31G 2.8G 21G 1.3M 6.7G 28G
17:05:07 Swap: 1.0G 0B 1.0G
17:05:07
17:05:07 NAMES STATUS
17:05:07 policy-apex-pdp Up About a minute
17:05:07 policy-pap Up About a minute
17:05:07 policy-api Up About a minute
17:05:07 grafana Up About a minute
17:05:07 kafka Up About a minute
17:05:07 zookeeper Up About a minute
17:05:07 simulator Up About a minute
17:05:07 mariadb Up About a minute
17:05:07 prometheus Up About a minute
17:05:07
17:05:10 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
17:05:10 47c01d308dd2 policy-apex-pdp 1.52% 173.1MiB / 31.41GiB 0.54% 26.3kB / 39.5kB 0B / 0B 49
17:05:10 b0a8ae1b4118 policy-pap 3.36% 553.6MiB / 31.41GiB 1.72% 109kB / 130kB 0B / 149MB 64
17:05:10 a673f658b7e3 policy-api 0.15% 500.3MiB / 31.41GiB 1.56% 989kB / 673kB 0B / 0B 53
17:05:10 88ba7133548e grafana 0.22% 111.4MiB / 31.41GiB 0.35% 19.1MB / 152kB 0B / 30.2MB 21
17:05:10 d41d77b0e435 kafka 6.26% 390.6MiB / 31.41GiB 1.21% 126kB / 126kB 0B / 639kB 87
17:05:10 ea42dc28b429 zookeeper 0.08% 84.92MiB / 31.41GiB 0.26% 57.4kB / 49.9kB 0B / 418kB 62
17:05:10 b6adcd749397 simulator 0.09% 120.1MiB / 31.41GiB 0.37% 1.27kB / 0B 0B / 0B 77
17:05:10 5a95574edcd7 mariadb 0.02% 102.2MiB / 31.41GiB 0.32% 970kB / 1.22MB 11MB / 71.3MB 31
17:05:10 031ff0b7a123 prometheus 0.05% 20.53MiB / 31.41GiB 0.06% 67.6kB / 3.05kB 4.1kB / 0B 13
17:05:10
17:05:10 Container policy-csit Creating
17:05:10 Container policy-csit Created
17:05:10 Attaching to policy-csit
17:05:11 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
17:05:11 policy-csit | Run Robot test
17:05:11 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
17:05:11 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
17:05:11 policy-csit | -v POLICY_API_IP:policy-api:6969
17:05:11 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
17:05:11 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
17:05:11 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
17:05:11 policy-csit | -v APEX_IP:policy-apex-pdp:6969
17:05:11 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
17:05:11 policy-csit | -v KAFKA_IP:kafka:9092
17:05:11 policy-csit | -v PROMETHEUS_IP:prometheus:9090
17:05:11 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
17:05:11 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
17:05:11 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
17:05:11 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
17:05:11 policy-csit | -v TEMP_FOLDER:/tmp/distribution
17:05:11 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
17:05:11 policy-csit | -v CLAMP_K8S_TEST:
17:05:11 policy-csit | Starting Robot test suites ...
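Given the ROBOT_VARIABLES listing above and the result files reported further down (/tmp/results/output.xml, log.html, report.html), run-test.sh presumably invokes Robot Framework along these lines. The actual script is not shown in this log, so the exact command and flags are an assumption; the combined suite name "Pap-Test & Pap-Slas" seen below is Robot Framework's default naming when two suite files are passed together:

    # hypothetical sketch of the invocation inside run-test.sh
    robot --outputdir /tmp/results ${ROBOT_VARIABLES} pap-test.robot pap-slas.robot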
17:05:11 policy-csit | ============================================================================== 17:05:11 policy-csit | Pap-Test & Pap-Slas 17:05:11 policy-csit | ============================================================================== 17:05:11 policy-csit | Pap-Test & Pap-Slas.Pap-Test 17:05:11 policy-csit | ============================================================================== 17:05:12 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 17:05:12 policy-csit | ------------------------------------------------------------------------------ 17:05:12 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 17:05:12 policy-csit | ------------------------------------------------------------------------------ 17:05:13 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | 17:05:13 policy-csit | ------------------------------------------------------------------------------ 17:05:13 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 17:05:13 policy-csit | ------------------------------------------------------------------------------ 17:05:33 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 17:05:33 policy-csit | ------------------------------------------------------------------------------ 17:05:33 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 17:05:34 policy-csit | ------------------------------------------------------------------------------ 17:05:34 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 17:05:34 policy-csit | ------------------------------------------------------------------------------ 17:05:34 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 17:05:34 policy-csit | ------------------------------------------------------------------------------ 17:05:34 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 17:05:34 policy-csit | ------------------------------------------------------------------------------ 17:05:35 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 17:05:35 policy-csit | ------------------------------------------------------------------------------ 17:05:35 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 17:05:35 policy-csit | ------------------------------------------------------------------------------ 17:05:35 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 17:05:35 policy-csit | ------------------------------------------------------------------------------ 17:05:35 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 17:05:35 policy-csit | ------------------------------------------------------------------------------ 17:05:35 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 17:05:35 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... 
| PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:36 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 17:05:36 policy-csit | ------------------------------------------------------------------------------ 17:05:37 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 17:05:37 policy-csit | ------------------------------------------------------------------------------ 17:05:37 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 17:05:37 policy-csit | ------------------------------------------------------------------------------ 17:05:37 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 17:05:37 policy-csit | 22 tests, 22 passed, 0 failed 17:05:37 policy-csit | ============================================================================== 17:05:37 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 17:05:37 policy-csit | ============================================================================== 17:06:37 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | 17:06:37 policy-csit | ------------------------------------------------------------------------------ 17:06:37 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 17:06:37 policy-csit | 8 tests, 8 passed, 0 failed 17:06:37 policy-csit | ============================================================================== 17:06:37 policy-csit | Pap-Test & Pap-Slas | PASS | 17:06:37 policy-csit | 30 tests, 30 passed, 0 failed 17:06:37 policy-csit | ============================================================================== 17:06:37 policy-csit | Output: /tmp/results/output.xml 17:06:37 policy-csit | Log: /tmp/results/log.html 17:06:37 policy-csit | Report: /tmp/results/report.html 17:06:37 policy-csit | RESULT: 0 17:06:37 policy-csit exited with code 0 17:06:37 NAMES STATUS 17:06:37 policy-apex-pdp Up 2 minutes 17:06:37 policy-pap Up 2 minutes 17:06:37 policy-api Up 2 minutes 17:06:37 grafana Up 2 minutes 17:06:37 kafka Up 2 minutes 17:06:37 zookeeper Up 2 minutes 17:06:37 simulator Up 2 minutes 17:06:37 mariadb Up 2 minutes 17:06:37 prometheus Up 2 minutes 17:06:37 Shut down started! 17:06:40 Collecting logs from docker compose containers... 17:06:43 ======== Logs from grafana ======== 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835098495Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-10T17:04:00Z 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835778975Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835793865Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835799645Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835805895Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835811635Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835817375Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835831255Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835837375Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835843186Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835850776Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835860656Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835866706Z level=info msg=Target target=[all] 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835893096Z level=info msg="Path Home" path=/usr/share/grafana 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835900746Z level=info msg="Path Data" 
path=/var/lib/grafana 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835905686Z level=info msg="Path Logs" path=/var/log/grafana 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835910757Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835920077Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 17:06:43 grafana | logger=settings t=2025-06-10T17:04:00.835937827Z level=info msg="App mode production" 17:06:43 grafana | logger=featuremgmt t=2025-06-10T17:04:00.836708538Z level=info msg=FeatureToggles recordedQueriesMulti=true annotationPermissionUpdate=true alertingRulePermanentlyDelete=true dataplaneFrontendFallback=true useSessionStorageForRedirection=true transformationsRedesign=true grafanaconThemes=true dashgpt=true cloudWatchCrossAccountQuerying=true prometheusUsesCombobox=true logsContextDatasourceUi=true correlations=true alertingNotificationsStepMode=true alertingApiServer=true externalCorePlugins=true awsAsyncQueryCaching=true groupToNestedTableTransformation=true logRowsPopoverMenu=true tlsMemcached=true dashboardSceneForViewers=true pinNavItems=true nestedFolders=true newDashboardSharingComponent=true lokiStructuredMetadata=true reportingUseRawTimeRange=true cloudWatchNewLabelParsing=true logsPanelControls=true pluginsDetailsRightPanel=true preinstallAutoUpdate=true influxdbBackendMigration=true azureMonitorEnableUserAuth=true publicDashboardsScene=true alertingInsights=true dashboardSceneSolo=true alertingRuleVersionHistoryRestore=true alertRuleRestore=true ssoSettingsSAML=true recoveryThreshold=true cloudWatchRoundUpEndTime=true unifiedStorageSearchPermissionFiltering=true promQLScope=true alertingRuleRecoverDeleted=true kubernetesClientDashboardsFolders=true alertingQueryAndExpressionsStepMode=true logsInfiniteScrolling=true alertingSimplifiedRouting=true formatString=true unifiedRequestLog=true prometheusAzureOverrideAudience=true lokiQuerySplitting=true newPDFRendering=true dashboardScene=true azureMonitorPrometheusExemplars=true kubernetesPlaylists=true onPremToCloudMigrations=true newFiltersUI=true addFieldFromCalculationStatFunctions=true lokiLabelNamesQueryApi=true panelMonitoring=true ssoSettingsApi=true angularDeprecationUI=true failWrongDSUID=true lokiQueryHints=true alertingUIOptimizeReducer=true logsExploreTableVisualisation=true 17:06:43 grafana | logger=sqlstore t=2025-06-10T17:04:00.836818779Z level=info msg="Connecting to DB" dbtype=sqlite3 17:06:43 grafana | logger=sqlstore t=2025-06-10T17:04:00.83683932Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.839552698Z level=info msg="Locking database" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.839578359Z level=info msg="Starting DB migrations" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.840553592Z level=info msg="Executing migration" id="create migration_log table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.841720729Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.166927ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.848326843Z level=info msg="Executing migration" id="create user table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.848931642Z level=info msg="Migration successfully executed" id="create user table" duration=604.539µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.852229879Z level=info 
msg="Executing migration" id="add unique index user.login" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.853263774Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.033255ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.856816024Z level=info msg="Executing migration" id="add unique index user.email" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.858015491Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.198497ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.863578551Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.864520414Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=943.413µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.867807611Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.868900537Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.092976ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.87263757Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.876347463Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.709763ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.879982185Z level=info msg="Executing migration" id="create user table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.880875597Z level=info msg="Migration successfully executed" id="create user table v2" duration=893.262µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.886306875Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.887461531Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.153956ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.891383717Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.892512103Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.130656ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.89578278Z level=info msg="Executing migration" id="copy data_source v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.896154545Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=371.775µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.901249267Z level=info msg="Executing migration" id="Drop old table user_v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.901782795Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=533.378µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.904889509Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.906040526Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.148337ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.909442724Z level=info msg="Executing migration" id="Update user table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.909506245Z level=info 
msg="Migration successfully executed" id="Update user table charset" duration=47.391µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.913021665Z level=info msg="Executing migration" id="Add last_seen_at column to user" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.915248687Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.226082ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.921450005Z level=info msg="Executing migration" id="Add missing user data" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.921602667Z level=info msg="Migration successfully executed" id="Add missing user data" duration=152.432µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.925778727Z level=info msg="Executing migration" id="Add is_disabled column to user" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.926541458Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=760.231µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.928855871Z level=info msg="Executing migration" id="Add index user.login/user.email" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.930330882Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.475981ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.935706098Z level=info msg="Executing migration" id="Add is_service_account column to user" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.937029667Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.323879ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.939951399Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.948211267Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.257618ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.951296761Z level=info msg="Executing migration" id="Add uid column to user" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.952489418Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.192216ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.955684823Z level=info msg="Executing migration" id="Update uid column values for users" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.956009728Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=324.385µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.961662598Z level=info msg="Executing migration" id="Add unique index user_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.962522311Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=857.302µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.966035371Z level=info msg="Executing migration" id="Add is_provisioned column to user" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.96741149Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.376319ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.97088066Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.971333046Z level=info msg="Migration successfully 
executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=453.836µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.974610903Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.975292952Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=682.599µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.980309404Z level=info msg="Executing migration" id="update login and email fields to lowercase" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.981247777Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=902.383µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.984736537Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.985361636Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=624.879µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.988705154Z level=info msg="Executing migration" id="create temp user table v1-7" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.989560576Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=854.952µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.995134645Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.995924426Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=789.541µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:00.999371656Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.000579293Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.206877ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.003755708Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.004956705Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.200577ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.007967588Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.008917781Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=950.093µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.014061834Z level=info msg="Executing migration" id="Update temp_user table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.014087175Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.911µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.017572744Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.018325124Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=752.2µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.020686468Z level=info msg="Executing migration" 
id="drop index IDX_temp_user_org_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.021390808Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=703.92µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.026405389Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.027114449Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=708.8µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.030516097Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.031347369Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=829.262µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.034690016Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.040606569Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.915893ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.0456115Z level=info msg="Executing migration" id="create temp_user v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.046704626Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.092836ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.049556266Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.051379612Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.819586ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.05475527Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.055906166Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.147716ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.061598677Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.062610011Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.010535ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.065815246Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.066629108Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=812.862µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.069716551Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.070160837Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=443.706µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.075619865Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.076201413Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=580.568µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.079311367Z level=info msg="Executing migration" id="Set created for temp 
users that will otherwise prematurely expire" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.079852805Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=540.978µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.083201932Z level=info msg="Executing migration" id="create star table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.084234166Z level=info msg="Migration successfully executed" id="create star table" duration=1.031554ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.08732761Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.088354605Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.026335ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.093229583Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.094759535Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.529382ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.097990371Z level=info msg="Executing migration" id="Add column org_id in star" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.099436122Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.445181ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.102464014Z level=info msg="Executing migration" id="Add column updated in star" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.103911975Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.447841ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.106876627Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.107695348Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=819.741µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.113404069Z level=info msg="Executing migration" id="create org table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.11493576Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.532451ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.118577352Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.11982544Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.247898ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.123164307Z level=info msg="Executing migration" id="create org_user table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.123840006Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=677.669µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.127136233Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.127941074Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=804.961µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.132949055Z level=info msg="Executing migration" id="create index 
UQE_org_user_org_id_user_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.133806587Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=856.882µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.136884201Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.138576945Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.692134ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.142324188Z level=info msg="Executing migration" id="Update org table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.14249434Z level=info msg="Migration successfully executed" id="Update org table charset" duration=170.832µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.145902548Z level=info msg="Executing migration" id="Update org_user table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.145982789Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=81.691µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.151529608Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.151759301Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=226.153µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.155017727Z level=info msg="Executing migration" id="create dashboard table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.156449597Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.4295ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.160360313Z level=info msg="Executing migration" id="add index dashboard.account_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.162047677Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.686064ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.165275212Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.166375998Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.101036ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.171532111Z level=info msg="Executing migration" id="create dashboard_tag table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.173059882Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.525601ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.176300528Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.1771629Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=861.922µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.179901129Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.180778412Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=876.732µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.185820043Z level=info msg="Executing migration" 
id="Rename table dashboard to dashboard_v1 - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.1912694Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.449077ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.194962032Z level=info msg="Executing migration" id="create dashboard v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.196366272Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.40346ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.199556277Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.200588242Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.033295ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.205506811Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.206403534Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=896.473µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.210121786Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.210488692Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=366.666µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.213801188Z level=info msg="Executing migration" id="drop table dashboard_v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.21464779Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=846.522µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.221286334Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.221376286Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=90.032µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.22523361Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.228182572Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.949122ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.231093823Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.23301998Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.924577ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.237814948Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.241019103Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.202545ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.244957839Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.246214697Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.257868ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.249861878Z level=info msg="Executing migration" id="Add column plugin_id in 
dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.251232748Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.37063ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.255791102Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.256684945Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=891.663µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.260726772Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.26199112Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.263168ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.266898939Z level=info msg="Executing migration" id="Update dashboard table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.267065231Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=168.322µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.27121111Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.27123348Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=22.52µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.274459376Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.276437234Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.977438ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.279613629Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.281552336Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.936107ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.291309584Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.293974462Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.663578ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.297644134Z level=info msg="Executing migration" id="Add column uid in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.300713997Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.068773ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.304382709Z level=info msg="Executing migration" id="Update uid column values in dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.304648803Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=269.254µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.308870702Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.309628983Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=759.421µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.312840549Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 17:06:43 grafana 
| logger=migrator t=2025-06-10T17:04:01.313503828Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=663.059µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.316750994Z level=info msg="Executing migration" id="Update dashboard title length" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.316775785Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=23.721µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.320248493Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.321454991Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.205717ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.325914444Z level=info msg="Executing migration" id="create dashboard_provisioning" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.326957028Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.042324ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.330965965Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.33983332Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.854695ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.343163088Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.343888558Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=725.2µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.348346131Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.349379725Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.033724ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.353172699Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.354150043Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=976.934µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.357319048Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.357655362Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=336.254µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.361568788Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.362133706Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=564.228µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.364675942Z level=info msg="Executing migration" id="Add check_sum column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.366759711Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.082989ms 
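The migrator entries above and below all share one shape: an "Executing migration" record carrying an id, then a "Migration successfully executed" record repeating that id and reporting a duration in µs or ms. As a minimal sketch (not part of the CSIT job output; the console.log filename and the top-10 cutoff are assumptions for illustration), the slowest migrations can be pulled out of a saved copy of this console log like so:

import re

# Hypothetical helper, not produced by this job: scan a saved console log for
# grafana migrator records of the form
#   msg="Migration successfully executed" id="<migration id>" duration=<value><unit>
# and report the slowest ones. Grafana emits the unit as µs, ms, or s.
ENTRY = re.compile(r'id="(?P<id>[^"]+)" duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)')
TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

def slowest_migrations(path, top=10):
    """Return the `top` slowest migrations as (milliseconds, migration id) pairs."""
    timings = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # Several records can share one physical line in the archived log,
            # so collect every match on the line rather than just the first.
            for m in ENTRY.finditer(line):
                ms = float(m.group("value")) * TO_MS[m.group("unit")]
                timings.append((ms, m.group("id")))
    return sorted(timings, reverse=True)[:top]

if __name__ == "__main__":
    # Assumes the console output shown here was saved locally as console.log.
    for ms, migration_id in slowest_migrations("console.log"):
        print(f"{ms:10.3f} ms  {migration_id}")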
17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.370320732Z level=info msg="Executing migration" id="Add index for dashboard_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.371120883Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=799.711µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.375114129Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.375352403Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=238.194µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.378533698Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.378766741Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=233.043µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.381913195Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.382744267Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=830.832µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.386827195Z level=info msg="Executing migration" id="Add isPublic for dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.389006006Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.178411ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.392256792Z level=info msg="Executing migration" id="Add deleted for dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.394462663Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.201911ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.397849091Z level=info msg="Executing migration" id="Add index for deleted" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.398655902Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=808.721µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.401915838Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.404207311Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.290903ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.40842517Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.410669102Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.243402ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.414154051Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.414565367Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=411.286µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.418039286Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.420293428Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.253812ms 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:01.425415701Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.426344034Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=927.514µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.429805633Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.43035139Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=543.807µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.43389605Z level=info msg="Executing migration" id="create data_source table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.434952915Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.057865ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.439943126Z level=info msg="Executing migration" id="add index data_source.account_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.440848649Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=904.633µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.44518266Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.446041782Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=858.972µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.449611433Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.450374593Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=762.63µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.454655104Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.456170395Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.515321ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.460070111Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.467534506Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.461335ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.471564113Z level=info msg="Executing migration" id="create data_source table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.473669663Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=2.10454ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.478594873Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.479721889Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.124316ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.483179757Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.484303753Z level=info msg="Migration successfully 
executed" id="create index UQE_data_source_org_id_name - v2" duration=1.169796ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.488621034Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.489360365Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=739.721µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.492913435Z level=info msg="Executing migration" id="Add column with_credentials" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.49538896Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.503316ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.499455167Z level=info msg="Executing migration" id="Add secure json data column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.503666557Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.21088ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.508325883Z level=info msg="Executing migration" id="Update data_source table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.508351233Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.83µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.511872363Z level=info msg="Executing migration" id="Update initial version to 1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.512075636Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=203.133µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.515675047Z level=info msg="Executing migration" id="Add read_only data column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.52095954Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=5.278153ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.52446946Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.524814675Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=345.015µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.529045475Z level=info msg="Executing migration" id="Update json_data with nulls" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.529248718Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=202.813µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.532310921Z level=info msg="Executing migration" id="Add uid column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.534728825Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.415334ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.53790971Z level=info msg="Executing migration" id="Update uid value" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.538109043Z level=info msg="Migration successfully executed" id="Update uid value" duration=198.793µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.541700954Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.543425698Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.723194ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.548208196Z level=info msg="Executing migration" 
id="add unique index datasource_org_id_is_default" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.54989913Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.691604ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.555150234Z level=info msg="Executing migration" id="Add is_prunable column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.556953709Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=1.802295ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.559955012Z level=info msg="Executing migration" id="Add api_version column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.562401337Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.443394ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.566693937Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.566715107Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=21.7µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.570129016Z level=info msg="Executing migration" id="create api_key table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.570990718Z level=info msg="Migration successfully executed" id="create api_key table" duration=899.973µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.574530638Z level=info msg="Executing migration" id="add index api_key.account_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.575319339Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=786.151µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.579951495Z level=info msg="Executing migration" id="add index api_key.key" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.581283643Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.332158ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.586994994Z level=info msg="Executing migration" id="add index api_key.account_id_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.588441505Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.44602ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.592390741Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.593678279Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.287019ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.599552912Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.600643117Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.126036ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.60438182Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.606153645Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.772675ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.609466112Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 17:06:43 grafana | 
logger=migrator t=2025-06-10T17:04:01.61637095Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.904698ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.620419677Z level=info msg="Executing migration" id="create api_key table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.621199778Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=779.511µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.624455444Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.625518219Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.062775ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.630266286Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.631090848Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=824.682µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.634721899Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.635547101Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=822.562µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.638928318Z level=info msg="Executing migration" id="copy api_key v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.639288024Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=359.636µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.64258309Z level=info msg="Executing migration" id="Drop old table api_key_v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.643196179Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=611.269µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.647596271Z level=info msg="Executing migration" id="Update api_key table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.647657562Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=62.251µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.651534897Z level=info msg="Executing migration" id="Add expires to api_key table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.6559731Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.438963ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.659213915Z level=info msg="Executing migration" id="Add service account foreign key" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.661856423Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.643018ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.666229525Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.666472748Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=243.733µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.669661193Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.673907003Z level=info msg="Migration 
successfully executed" id="Add last_used_at to api_key table" duration=4.24402ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.677677436Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.682965821Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=5.289175ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.68713986Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.687861771Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=721.601µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.691714925Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.692848851Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.142876ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.697611268Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.699054979Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.443961ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.707760862Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.709082551Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.321619ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.712557379Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.71399062Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.432801ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.718571625Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.719368466Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=796.831µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.725611194Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.725636665Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=27.091µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.728423764Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.728470335Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=47.711µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.731859882Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.735356492Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.49541ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.740766149Z level=info 
msg="Executing migration" id="Add encrypted dashboard json column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.744010334Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.241495ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.751954867Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.751993457Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=42.28µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.757949231Z level=info msg="Executing migration" id="create quota table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.759198499Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.242928ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.763603421Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.764549805Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=946.354µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.769479145Z level=info msg="Executing migration" id="Update quota table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.769506875Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.92µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.773588253Z level=info msg="Executing migration" id="create plugin_setting table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.774472505Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=883.772µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.77834649Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.779196202Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=849.172µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.784694599Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.789091432Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.397103ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.796795581Z level=info msg="Executing migration" id="Update plugin_setting table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.796832421Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=38.91µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.800478893Z level=info msg="Executing migration" id="update NULL org_id to 1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.800890469Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=415.656µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.804032103Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.815016008Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.984465ms 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:01.821168555Z level=info msg="Executing migration" id="create session table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.822664097Z level=info msg="Migration successfully executed" id="create session table" duration=1.493282ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.831046705Z level=info msg="Executing migration" id="Drop old table playlist table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.831192957Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=148.862µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.836744226Z level=info msg="Executing migration" id="Drop old table playlist_item table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.836840647Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=96.491µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.843218467Z level=info msg="Executing migration" id="create playlist table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.8440872Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=871.593µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.847490427Z level=info msg="Executing migration" id="create playlist item table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.848067826Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=577.979µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.851327702Z level=info msg="Executing migration" id="Update playlist table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.851370672Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=44.26µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.854996454Z level=info msg="Executing migration" id="Update playlist_item table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.855039324Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.81µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.858179609Z level=info msg="Executing migration" id="Add playlist column created_at" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.862387458Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.208149ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.865382091Z level=info msg="Executing migration" id="Add playlist column updated_at" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.868278631Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.89205ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.871070061Z level=info msg="Executing migration" id="drop preferences table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.871148542Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=77.511µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.873966062Z level=info msg="Executing migration" id="drop preferences table v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.874045113Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=79.411µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.877640444Z level=info msg="Executing migration" id="create preferences table v3" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:01.878587637Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=948.533µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.883127632Z level=info msg="Executing migration" id="Update preferences table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.883169012Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=44.541µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.885986422Z level=info msg="Executing migration" id="Add column team_id in preferences" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.888571049Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.584437ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.891320777Z level=info msg="Executing migration" id="Update team_id column values in preferences" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.891431889Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=110.552µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.895609008Z level=info msg="Executing migration" id="Add column week_start in preferences" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.899421112Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.808514ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.903051833Z level=info msg="Executing migration" id="Add column preferences.json_data" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.907197282Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.144119ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.912675189Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.91270652Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=43.811µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.918672564Z level=info msg="Executing migration" id="Add preferences index org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.920199845Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.522101ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.923821797Z level=info msg="Executing migration" id="Add preferences index user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.924938603Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.116056ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.928525894Z level=info msg="Executing migration" id="create alert table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.929870382Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.345528ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.935515842Z level=info msg="Executing migration" id="add index alert org_id & id " 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.936523297Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.007005ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.939631851Z level=info msg="Executing migration" id="add index alert state" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.940584394Z level=info msg="Migration 
successfully executed" id="add index alert state" duration=951.763µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.943490615Z level=info msg="Executing migration" id="add index alert dashboard_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.944190375Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=699.37µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.949709353Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.950291251Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=581.638µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.953389425Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.954348138Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=958.353µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.959609813Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.960414145Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=804.531µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.965716719Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.974542844Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.824795ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.977400745Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.977988233Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=586.808µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.980838163Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.981849227Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.009324ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.986840268Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.987134282Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=293.574µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.990091984Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.990572371Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=479.597µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.992828903Z level=info msg="Executing migration" id="create alert_notification table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:01.993474322Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=645.679µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.000186797Z level=info msg="Executing 
migration" id="Add column is_default" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.003521884Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.334137ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.006397236Z level=info msg="Executing migration" id="Add column frequency" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.010667489Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.265953ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.013977187Z level=info msg="Executing migration" id="Add column send_reminder" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.017392967Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.41491ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.021334124Z level=info msg="Executing migration" id="Add column disable_resolve_message" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.024670343Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.335819ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.03068044Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.031500902Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=822.482µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.034868081Z level=info msg="Executing migration" id="Update alert table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.035012583Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=144.142µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.037974536Z level=info msg="Executing migration" id="Update alert_notification table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.038031437Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=57.071µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.043376835Z level=info msg="Executing migration" id="create notification_journal table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.044305538Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=928.123µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.047978632Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.049618096Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.641844ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.053014845Z level=info msg="Executing migration" id="drop alert_notification_journal" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.053867428Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=852.123µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.060753168Z level=info msg="Executing migration" id="create alert_notification_state table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.061687431Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=936.313µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.065082081Z level=info msg="Executing migration" id="add 
index alert_notification_state org_id & alert_id & notifier_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.066127036Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.041535ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.069595326Z level=info msg="Executing migration" id="Add for to alert table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.07875802Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=9.159503ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.082051707Z level=info msg="Executing migration" id="Add column uid in alert_notification" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.086967359Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.908522ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.093103938Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.093363842Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=259.304µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.097785096Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.098842202Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.056786ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.10216486Z level=info msg="Executing migration" id="Remove unique index org_id_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.103382648Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.216098ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.109584768Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.113793459Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.210301ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.117613345Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.117640225Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=28.17µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.121098015Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.122924672Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.820577ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.127440958Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.129579459Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=2.137321ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.141232249Z level=info msg="Executing migration" id="Drop old annotation table v4" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.141374971Z level=info msg="Migration successfully executed" id="Drop old annotation 
table v4" duration=142.392µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.146354424Z level=info msg="Executing migration" id="create annotation table v5" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.147367748Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.011484ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.152520903Z level=info msg="Executing migration" id="add index annotation 0 v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.154067606Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.546193ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.160216666Z level=info msg="Executing migration" id="add index annotation 1 v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.161294031Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.077035ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.165262059Z level=info msg="Executing migration" id="add index annotation 2 v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.167127656Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.862507ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.173994646Z level=info msg="Executing migration" id="add index annotation 3 v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.17562297Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.625874ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.183403283Z level=info msg="Executing migration" id="add index annotation 4 v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.184338437Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=934.454µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.189730245Z level=info msg="Executing migration" id="Update annotation table charset" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.189802846Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=74.371µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.195625061Z level=info msg="Executing migration" id="Add column region_id to annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.202445241Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.82123ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.208152834Z level=info msg="Executing migration" id="Drop category_id index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.209053367Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=900.783µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.216921302Z level=info msg="Executing migration" id="Add column tags to annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.221502538Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.580756ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.225900433Z level=info msg="Executing migration" id="Create annotation_tag table v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.226693574Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=792.881µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.232480099Z level=info msg="Executing migration" id="Add unique 
index annotation_tag.annotation_id_tag_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.234335355Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.855677ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.239753414Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.241593891Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.840497ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.245761972Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.258982954Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.221332ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.265729003Z level=info msg="Executing migration" id="Create annotation_tag table v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.267035162Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.307659ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.272840076Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.274915387Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=2.075101ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.281188768Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.281530853Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=338.795µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.286235782Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.287188636Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=948.483µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.292361911Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.292799687Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=436.316µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.299303552Z level=info msg="Executing migration" id="Add created time to annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.303624805Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.320743ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.310952052Z level=info msg="Executing migration" id="Add updated time to annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.315343716Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.390443ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.321444645Z level=info msg="Executing migration" id="Add index for created in annotation table" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:02.322450179Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.005274ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.328391906Z level=info msg="Executing migration" id="Add index for updated in annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.329768616Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.37749ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.337093443Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.337455298Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=364.405µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.342592823Z level=info msg="Executing migration" id="Add epoch_end column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.35335576Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=10.758217ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.356623497Z level=info msg="Executing migration" id="Add index for epoch_end" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.35817634Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.551793ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.364447581Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.364719895Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=271.544µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.36780604Z level=info msg="Executing migration" id="Move region to single row" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.368317837Z level=info msg="Migration successfully executed" id="Move region to single row" duration=511.597µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.373147608Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.374851473Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.703845ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.38016665Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.381176415Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.009785ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.389216902Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.390823436Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.606594ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.395544454Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.396669621Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" 
duration=1.124077ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.403127045Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.40487768Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.746635ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.410580583Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.41172113Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.140247ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.416383918Z level=info msg="Executing migration" id="Increase tags column to length 4096" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.416404418Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=21.51µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.421758566Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.421829567Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=72.441µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.427047433Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.427075604Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=29.661µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.43093373Z level=info msg="Executing migration" id="create test_data table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.432008596Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.074206ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.435973943Z level=info msg="Executing migration" id="create dashboard_version table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.437722379Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.746976ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.442772662Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.44394756Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.174848ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.451723203Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.453297146Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.573943ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.457131592Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.457555098Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=425.277µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.465649796Z level=info msg="Executing migration" id="save existing dashboard 
data in dashboard_version table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.466043231Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=393.185µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.470212522Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.470229882Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=18.28µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.472347443Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.477048092Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.699589ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.484186636Z level=info msg="Executing migration" id="create team table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.484969387Z level=info msg="Migration successfully executed" id="create team table" duration=782.621µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.492406265Z level=info msg="Executing migration" id="add index team.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.493655234Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.245319ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.497714393Z level=info msg="Executing migration" id="add unique index team_org_id_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.498731098Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.016395ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.503116762Z level=info msg="Executing migration" id="Add column uid in team" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.508401319Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.286646ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.514796992Z level=info msg="Executing migration" id="Update uid column values in team" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.51533181Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=534.798µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.522696927Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.524435152Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.736985ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.529384364Z level=info msg="Executing migration" id="Add column external_uid in team" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.533198Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=3.814386ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.536356556Z level=info msg="Executing migration" id="Add column is_provisioned in team" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.54075915Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.402384ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.548241189Z level=info msg="Executing migration" id="create team member table" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:02.549207353Z level=info msg="Migration successfully executed" id="create team member table" duration=965.344µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.552732455Z level=info msg="Executing migration" id="add index team_member.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.553971563Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.238458ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.558546759Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.559351431Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=804.552µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.566986932Z level=info msg="Executing migration" id="add index team_member.team_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.567808514Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=821.752µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.571503688Z level=info msg="Executing migration" id="Add column email to team table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.575995163Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.490795ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.579742798Z level=info msg="Executing migration" id="Add column external to team_member table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.584745181Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.993343ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.598517952Z level=info msg="Executing migration" id="Add column permission to team_member table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.603512335Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.991762ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.608424486Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.609989869Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.566383ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.620457531Z level=info msg="Executing migration" id="create dashboard acl table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.622959978Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=2.503887ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.630385146Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.632896642Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.509106ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.637886845Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.638985411Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.099616ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.643971354Z level=info msg="Executing migration" id="add unique index 
dashboard_acl_dashboard_id_team_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.645281053Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.313259ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.651628435Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.652671041Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.044855ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.655672235Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.656990214Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.30752ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.660715438Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.665279104Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=4.563696ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.670116725Z level=info msg="Executing migration" id="add index dashboard_permission" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.671005798Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=888.753µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.676859883Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.677913939Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.053665ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.687828053Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.688071296Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=243.383µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.693498266Z level=info msg="Executing migration" id="create tag table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.694285407Z level=info msg="Migration successfully executed" id="create tag table" duration=789.681µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.698367796Z level=info msg="Executing migration" id="add index tag.key_value" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.699168428Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=800.842µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.703514941Z level=info msg="Executing migration" id="create login attempt table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.704232032Z level=info msg="Migration successfully executed" id="create login attempt table" duration=718.101µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.708755508Z level=info msg="Executing migration" id="add index login_attempt.username" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.709965626Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.208688ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.71439458Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - 
v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.716369109Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.972529ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.720436868Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.733856424Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.411515ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.739285572Z level=info msg="Executing migration" id="create login_attempt v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.740338608Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.053526ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.744440358Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.745567564Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.126786ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.753615121Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.753990967Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=375.696µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.758599744Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.759334415Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=735.011µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.766628091Z level=info msg="Executing migration" id="create user auth table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.767725577Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.092266ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.771554463Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.772363985Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=809.241µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.776316112Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.776330452Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=14.97µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.779215665Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.783745991Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.529725ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.788211325Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.79195386Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.741895ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.794960294Z level=info msg="Executing migration" 
id="Add OAuth token type to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.79881635Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.855726ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.80227002Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.806210608Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.940178ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.810240186Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.811791639Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.548653ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.815620355Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.821196266Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.575391ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.826862169Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.830952218Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.089879ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.834747344Z level=info msg="Executing migration" id="create server_lock table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.835529415Z level=info msg="Migration successfully executed" id="create server_lock table" duration=779.361µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.841567843Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.842609248Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.040785ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.846580666Z level=info msg="Executing migration" id="create user auth token table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.847221615Z level=info msg="Migration successfully executed" id="create user auth token table" duration=640.859µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.850411122Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.851194273Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=782.911µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.854095436Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.855038509Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=942.503µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.859371282Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.860499539Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.128047ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.864249513Z level=info msg="Executing migration" id="Add 
revoked_at to the user auth token" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.870116809Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.866446ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.874430592Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.875540958Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.109706ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.880024293Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.884647861Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=4.623098ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.888590508Z level=info msg="Executing migration" id="create cache_data table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.889221967Z level=info msg="Migration successfully executed" id="create cache_data table" duration=631.149µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.893117614Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.893823324Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=705.29µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.897756821Z level=info msg="Executing migration" id="create short_url table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.89836849Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=611.459µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.90241606Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.903586437Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.170408ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.908005531Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.908025211Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=20.61µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.912796881Z level=info msg="Executing migration" id="delete alert_definition table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.912895252Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=98.221µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.917219085Z level=info msg="Executing migration" id="recreate alert_definition table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.917928196Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=708.591µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.921567939Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.92235607Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=787.171µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.926603042Z 
level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.927321512Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=718.26µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.934120631Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.934151622Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=32.031µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.938342033Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.939064724Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=722.631µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.943940465Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.944629555Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=686.5µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.947504366Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.94841553Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=912.514µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.952348427Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.953302031Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=953.084µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.958213793Z level=info msg="Executing migration" id="Add column paused in alert_definition" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.964273471Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.059058ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.968170597Z level=info msg="Executing migration" id="drop alert_definition table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.969770261Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.599054ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.97386447Z level=info msg="Executing migration" id="delete alert_definition_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.973931901Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=67.701µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.978223654Z level=info msg="Executing migration" id="recreate alert_definition_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.978860063Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=636.289µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.983252377Z level=info msg="Executing 
migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.984460385Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.207118ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.987720232Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.988698447Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=977.914µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.992870208Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.992908948Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=39.01µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:02.99923695Z level=info msg="Executing migration" id="drop alert_definition_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.001731526Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=2.494276ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.006207272Z level=info msg="Executing migration" id="create alert_instance table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.006906442Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=698.81µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.011143413Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.011851113Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=707.44µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.015986273Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.016693104Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=706.731µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.020854174Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.025070725Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.216131ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.029954676Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.031318886Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.36478ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.035876672Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.036534251Z 
level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=657.369µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.041130248Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.06539067Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=24.259772ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.068188851Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.091517879Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.327058ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.096595253Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.098419829Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.822176ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.104489557Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.105748775Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.260088ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.111503619Z level=info msg="Executing migration" id="add current_reason column related to current_state" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.123593924Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=12.085615ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.128259532Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.136066765Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.804833ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.139701368Z level=info msg="Executing migration" id="create alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.14122936Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.527502ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.145036145Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.146122421Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.088486ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.15017892Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.151421228Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.242558ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.155562028Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.156790896Z level=info msg="Migration 
successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.228238ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.161017997Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.161035948Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=19.091µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.165074116Z level=info msg="Executing migration" id="add column for to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.171162394Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.087788ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.174477073Z level=info msg="Executing migration" id="add column annotations to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.18049186Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.014027ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.185188548Z level=info msg="Executing migration" id="add column labels to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.194908879Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.721181ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.202125884Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.203449243Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.33708ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.206607339Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.207689365Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.081236ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.213361047Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.220560731Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.197784ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.223592235Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.230183781Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.595436ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.235975665Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.238087756Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.11137ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.241214311Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.24805809Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" 
duration=6.842869ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.251323597Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.258447221Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.095233ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.264235715Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.264266815Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=32.77µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.267572193Z level=info msg="Executing migration" id="create alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.269304728Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.732095ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.272749818Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.274421733Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.672675ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.281074239Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.282270396Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.196807ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.285430522Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.285453393Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=23.911µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.288555128Z level=info msg="Executing migration" id="add column for to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.296929359Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.371261ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.302089914Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.313180185Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=11.095791ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.3197644Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.326681291Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.916441ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.329548482Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.336100527Z level=info msg="Migration successfully executed" id="add rule_group_idx column to 
alert_rule_version" duration=6.550825ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.339278974Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.345268671Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.989227ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.349702605Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.349725595Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=23.08µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.352521136Z level=info msg="Executing migration" id=create_alert_configuration_table 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.353361918Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=840.122µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.356700876Z level=info msg="Executing migration" id="Add column default in alert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.363450304Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.744668ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.367852878Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.367876639Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=24.131µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.371359509Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.378170208Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.810249ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.381842511Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.382799865Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=956.894µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.388948084Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.395633191Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.684857ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.399601399Z level=info msg="Executing migration" id=create_ngalert_configuration_table 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.401272903Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.673984ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.412284523Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.413968697Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.684315ms 
17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.424234996Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.434667238Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.435152ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.44450581Z level=info msg="Executing migration" id="create provenance_type table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.445099289Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=593.159µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.455154645Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.456812639Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.656754ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.467530654Z level=info msg="Executing migration" id="create alert_image table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.468846853Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.314799ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.479004771Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.480629695Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.624923ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.490854883Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.490885013Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=31.66µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.497774543Z level=info msg="Executing migration" id=create_alert_configuration_history_table 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.499451417Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.671174ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.505748979Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.506746193Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=996.294µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.511488812Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.512322824Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.515672443Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.516431024Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=752.671µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.519317686Z level=info 
msg="Executing migration" id="add unique index on orgID to alert_configuration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.520347791Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.029295ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.525663968Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.538625376Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=12.959538ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.541779372Z level=info msg="Executing migration" id="create library_element table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.542608144Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=828.103µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.547900811Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.549616865Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.715435ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.553049315Z level=info msg="Executing migration" id="create library_element_connection table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.554473126Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.424431ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.557347827Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.558448573Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.096506ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.561696301Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.562716005Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.019045ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.567262901Z level=info msg="Executing migration" id="increase max description length to 2048" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.567291222Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.031µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.571453272Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.571475832Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=23.19µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.574615248Z level=info msg="Executing migration" id="add library_element folder uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.58510563Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.490362ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.590577779Z level=info msg="Executing migration" id="populate library_element folder_uid" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:03.591050096Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=471.427µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.593922418Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.59475878Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=836.052µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.597626022Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.597919406Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=292.744µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.599970226Z level=info msg="Executing migration" id="create data_keys table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.600984061Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.012755ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.608011272Z level=info msg="Executing migration" id="create secrets table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.611116808Z level=info msg="Migration successfully executed" id="create secrets table" duration=3.103255ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.615642713Z level=info msg="Executing migration" id="rename data_keys name column to id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.649995482Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.351868ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.654192963Z level=info msg="Executing migration" id="add name column into data_keys" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.662605764Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.410921ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.667749429Z level=info msg="Executing migration" id="copy data_keys id column values into name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.667924222Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=174.303µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.671133428Z level=info msg="Executing migration" id="rename data_keys name column to label" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.705338064Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.205746ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.708631012Z level=info msg="Executing migration" id="rename data_keys id column back to name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.735666175Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.034433ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.741303806Z level=info msg="Executing migration" id="create kv_store table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.74222769Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=922.974µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.74570394Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 17:06:43 grafana | 
logger=migrator t=2025-06-10T17:04:03.746907158Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.202688ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.74985533Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.750165595Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=310.035µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.755682165Z level=info msg="Executing migration" id="create permission table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.756930033Z level=info msg="Migration successfully executed" id="create permission table" duration=1.246248ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.760014988Z level=info msg="Executing migration" id="add unique index permission.role_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.761408658Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.39308ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.764384221Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.765397406Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.012975ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.768084935Z level=info msg="Executing migration" id="create role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.768976458Z level=info msg="Migration successfully executed" id="create role table" duration=891.133µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.777122626Z level=info msg="Executing migration" id="add column display_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.788647373Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.517417ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.792876735Z level=info msg="Executing migration" id="add column group_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.798410575Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.53291ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.803764773Z level=info msg="Executing migration" id="add index role.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.80492866Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.164656ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.807998464Z level=info msg="Executing migration" id="add unique index role_org_id_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.808899067Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=901.963µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.812405108Z level=info msg="Executing migration" id="add index role_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.813803118Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.3975ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.81671445Z level=info msg="Executing migration" id="create team role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.817704435Z level=info msg="Migration successfully executed" id="create team role 
table" duration=989.375µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.822535615Z level=info msg="Executing migration" id="add index team_role.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.823641421Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.105376ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.826540973Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.827670449Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.128936ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.832967106Z level=info msg="Executing migration" id="add index team_role.team_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.834062752Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.095156ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.837651794Z level=info msg="Executing migration" id="create user role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.838944253Z level=info msg="Migration successfully executed" id="create user role table" duration=1.295819ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.84355811Z level=info msg="Executing migration" id="add index user_role.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.844722167Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.160757ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.847861832Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.849211472Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.34828ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.852501489Z level=info msg="Executing migration" id="add index user_role.user_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.854292926Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.790906ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.859259088Z level=info msg="Executing migration" id="create builtin role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.860270242Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.008164ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.864270201Z level=info msg="Executing migration" id="add index builtin_role.role_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.865413747Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.143596ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.868671474Z level=info msg="Executing migration" id="add index builtin_role.name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.869714469Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.046995ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.872839415Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.880996673Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.156548ms 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:03.885663241Z level=info msg="Executing migration" id="add index builtin_role.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.886814958Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.150927ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.890098035Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.891171531Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.072926ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.894522489Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.896357706Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.834507ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.901116975Z level=info msg="Executing migration" id="add unique index role.uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.902212931Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.095146ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.905259815Z level=info msg="Executing migration" id="create seed assignment table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.906059397Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=799.562µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.909089151Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.910212967Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.123396ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.914201985Z level=info msg="Executing migration" id="add column hidden to role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.922169301Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.963825ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.925319866Z level=info msg="Executing migration" id="permission kind migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.933199201Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.878585ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.936235254Z level=info msg="Executing migration" id="permission attribute migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.942032429Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.795745ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.947163403Z level=info msg="Executing migration" id="permission identifier migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.955260971Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.091687ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.958401376Z level=info msg="Executing migration" id="add permission identifier index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.959452842Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.050766ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.96278119Z level=info 
msg="Executing migration" id="add permission action scope role_id index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.963865185Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.083245ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.968014326Z level=info msg="Executing migration" id="remove permission role_id action scope index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.969050141Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.035565ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.978507948Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.98556956Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=7.062862ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.988912959Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.989742011Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=828.112µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.994606301Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.995599376Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=992.415µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:03.998971685Z level=info msg="Executing migration" id="create query_history table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.000381795Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.40986ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.003743294Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.005434518Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.690344ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.009884852Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.009908913Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=24.151µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.01246049Z level=info msg="Executing migration" id="create query_history_details table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.013302562Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=841.972µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.017531403Z level=info msg="Executing migration" id="rbac disabled migrator" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.017622224Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=94.551µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.022541525Z level=info msg="Executing migration" id="teams permissions migration" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:04.022936001Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=394.216µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.026034795Z level=info msg="Executing migration" id="dashboard permissions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.026488182Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=453.117µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.028679664Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.02915042Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=470.786µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.032111973Z level=info msg="Executing migration" id="drop managed folder create actions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.032339336Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=227.433µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.037730354Z level=info msg="Executing migration" id="alerting notification permissions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.038649817Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=919.323µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.043264674Z level=info msg="Executing migration" id="create query_history_star table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.044196617Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=931.393µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.047511525Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.048837394Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.325549ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.052206413Z level=info msg="Executing migration" id="add column org_id in query_history_star" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.059781522Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.575709ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.067154128Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.067178019Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=25.731µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.071606813Z level=info msg="Executing migration" id="create correlation table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.072743829Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.136816ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.076096807Z level=info msg="Executing migration" id="add index correlations.uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.077258564Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.161197ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.08043685Z level=info msg="Executing migration" id="add index correlations.source_uid" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:04.081612727Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.175957ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.086311635Z level=info msg="Executing migration" id="add correlation config column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.095279894Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.967699ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.098850046Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.099671398Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=822.011µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.102714121Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.103508703Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=795.472µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.110783468Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.133207991Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.423363ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.136342436Z level=info msg="Executing migration" id="create correlation v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.137110758Z level=info msg="Migration successfully executed" id="create correlation v2" duration=764.281µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.142305712Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.143400408Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.094316ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.147533018Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.148687865Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.155227ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.151773909Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.152871005Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.100046ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.157771216Z level=info msg="Executing migration" id="copy correlation v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.15809067Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=318.984µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.160647997Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.161907085Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.258558ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.165160312Z level=info msg="Executing migration" id="add provisioning column" 17:06:43 grafana | 
logger=migrator t=2025-06-10T17:04:04.174217703Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.057381ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.179239915Z level=info msg="Executing migration" id="add type column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.186394398Z level=info msg="Migration successfully executed" id="add type column" duration=7.156593ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.189647815Z level=info msg="Executing migration" id="create entity_events table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.190428887Z level=info msg="Migration successfully executed" id="create entity_events table" duration=780.052µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.192779781Z level=info msg="Executing migration" id="create dashboard public config v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.193892187Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.111637ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.197238055Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.197726362Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.200722745Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.201239783Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.204209455Z level=info msg="Executing migration" id="Drop old dashboard public config table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.205104058Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=894.383µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.208213653Z level=info msg="Executing migration" id="recreate dashboard public config v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.209289449Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.075626ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.212256731Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.213482959Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.227228ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.216455472Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.21771102Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.255358ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.220778504Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.221922971Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" 
duration=1.144397ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.225253539Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.22739641Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.144431ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.23087392Z level=info msg="Executing migration" id="Drop public config table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.231830384Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.010645ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.23504037Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.237428075Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.386634ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.247028083Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.248270961Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.242648ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.251493827Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.252701095Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.206488ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.257081478Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.258328496Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.246898ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.263735744Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.285433867Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.698663ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.289447455Z level=info msg="Executing migration" id="add annotations_enabled column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.297214437Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.765062ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.300339042Z level=info msg="Executing migration" id="add time_selection_enabled column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.30920487Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.865188ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.314180572Z level=info msg="Executing migration" id="delete orphaned public dashboards" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.314434235Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=250.763µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.318510324Z level=info 
msg="Executing migration" id="add share column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.331927668Z level=info msg="Migration successfully executed" id="add share column" duration=13.417714ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.336629405Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.336772897Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=143.572µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.342619232Z level=info msg="Executing migration" id="create file table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.343578206Z level=info msg="Migration successfully executed" id="create file table" duration=958.614µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.349267828Z level=info msg="Executing migration" id="file table idx: path natural pk" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.350459855Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.192467ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.354366081Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.355592659Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.226508ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.367272597Z level=info msg="Executing migration" id="create file_meta table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.368203391Z level=info msg="Migration successfully executed" id="create file_meta table" duration=931.574µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.372380851Z level=info msg="Executing migration" id="file table idx: path key" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.373501107Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.119626ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.379377222Z level=info msg="Executing migration" id="set path collation in file table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.379392362Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=15.43µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.384693469Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.384707969Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=14.66µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.391808581Z level=info msg="Executing migration" id="managed permissions migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.3923938Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=585.019µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.39516589Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.395512525Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=346.855µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.398436577Z level=info 
msg="Executing migration" id="RBAC action name migrator" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.400451966Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.013979ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.403628312Z level=info msg="Executing migration" id="Add UID column to playlist" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.417176827Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=13.538785ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.422680607Z level=info msg="Executing migration" id="Update uid column values in playlist" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.42294066Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=261.453µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.435321239Z level=info msg="Executing migration" id="Add index for uid in playlist" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.437184096Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.866377ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.444768315Z level=info msg="Executing migration" id="update group index for alert rules" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.445242502Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=474.527µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.450307765Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.450729801Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=416.626µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.454345453Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.455022103Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=676.66µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.459043071Z level=info msg="Executing migration" id="add action column to seed_assignment" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.469611664Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.566143ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.476838498Z level=info msg="Executing migration" id="add scope column to seed_assignment" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.484989625Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.148157ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.488585157Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.489815265Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.230168ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.493033081Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.571792017Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to 
nullable" duration=78.753986ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.584395459Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.586937116Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.543157ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.591976539Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.59485281Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.870341ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.601005199Z level=info msg="Executing migration" id="add primary key to seed_assigment" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.633628359Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=32.61879ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.63781588Z level=info msg="Executing migration" id="add origin column to seed_assignment" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.646931901Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.111781ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.651634829Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.651976394Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=341.755µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.667745831Z level=info msg="Executing migration" id="prevent seeding OnCall access" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.668047836Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=302.715µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.674499509Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.674740852Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=241.763µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.679327379Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.679550222Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=222.803µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.683882694Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.684030666Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=147.962µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.689011238Z level=info msg="Executing migration" id="create folder table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.689763329Z level=info msg="Migration successfully executed" id="create folder table" duration=752.341µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.69335401Z level=info msg="Executing migration" id="Add index for parent_uid" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:04.694250253Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=895.723µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.697248137Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.698116319Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=870.753µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.702439511Z level=info msg="Executing migration" id="Update folder title length" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.702460952Z level=info msg="Migration successfully executed" id="Update folder title length" duration=20.321µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.707600566Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.708526449Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=925.923µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.711889068Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.71275813Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=868.892µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.71618465Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.717452598Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.267288ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.722566332Z level=info msg="Executing migration" id="Sync dashboard and folder table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.723038269Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=462.996µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.726328316Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.726727532Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=399.196µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.730654118Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.732002698Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.34952ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.736026606Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.737268484Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.241828ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.740406479Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.741513695Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.106766ms 
17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.744930894Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.746188082Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.256808ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.751870454Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.753017871Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.147687ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.759637646Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.760716442Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.078646ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.764907192Z level=info msg="Executing migration" id="create anon_device table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.765740134Z level=info msg="Migration successfully executed" id="create anon_device table" duration=829.952µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.768668907Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.769728772Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.059835ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.772479182Z level=info msg="Executing migration" id="add index anon_device.updated_at" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.773508707Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.029275ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.777748898Z level=info msg="Executing migration" id="create signing_key table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.778655751Z level=info msg="Migration successfully executed" id="create signing_key table" duration=909.373µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.781737095Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.782815171Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.077756ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.785978826Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.787083482Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.104426ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.79176027Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.792248507Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=490.007µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.797049986Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:04.806762516Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.71001ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.810047514Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.810549821Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=502.527µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.816436296Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.816469936Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=34.8µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.820248411Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.821225755Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=977.254µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.827442814Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.827460965Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.891µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.831595244Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.832875323Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.276779ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.836710418Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.838574605Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.866337ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.84172062Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.842525492Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=804.712µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.846404148Z level=info msg="Executing migration" id="create sso_setting table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.847167469Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=763.471µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.850829522Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.851769685Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=940.633µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.855911665Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.857010491Z level=info msg="Migration 
successfully executed" id="add back entry for orgid=0 migrated status" duration=1.099796ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.862330748Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.863037958Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=706.61µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.868191492Z level=info msg="Executing migration" id="create cloud_migration table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.869531952Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.337059ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.875984845Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.877196002Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.210927ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.880368598Z level=info msg="Executing migration" id="add stack_id column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.890581665Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.213047ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.896573752Z level=info msg="Executing migration" id="add region_slug column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.903498792Z level=info msg="Migration successfully executed" id="add region_slug column" duration=6.92435ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.913368734Z level=info msg="Executing migration" id="add cluster_slug column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.921049825Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.681571ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.926064737Z level=info msg="Executing migration" id="add migration uid column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.941879985Z level=info msg="Migration successfully executed" id="add migration uid column" duration=15.814498ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.946117776Z level=info msg="Executing migration" id="Update uid column values for migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.946709015Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=591.199µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.950433539Z level=info msg="Executing migration" id="Add unique index migration_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.951733587Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.299858ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.955724915Z level=info msg="Executing migration" id="add migration run uid column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.971113027Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=15.382712ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.976453734Z level=info msg="Executing migration" id="Update uid column values for migration run" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.976683617Z level=info msg="Migration successfully executed" id="Update uid column values for 
migration run" duration=230.593µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.980195618Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.981478696Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.282778ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:04.985742428Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.009067474Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=23.325796ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.014277408Z level=info msg="Executing migration" id="create cloud_migration_session v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.015306903Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.031075ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.021308948Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.022568836Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.259638ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.02701305Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.027369485Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=356.575µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.030761663Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.031630625Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=868.822µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.035937577Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.057027486Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=21.089079ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.061564081Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.06223762Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=673.289µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.065591678Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.0664195Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=827.422µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.073904296Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.074379733Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=481.347µs 17:06:43 grafana | logger=migrator 
t=2025-06-10T17:04:05.079009939Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.079965283Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=955.214µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.083612364Z level=info msg="Executing migration" id="add snapshot upload_url column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.093217091Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.604497ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.09674695Z level=info msg="Executing migration" id="add snapshot status column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.108325954Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=11.579064ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.113378107Z level=info msg="Executing migration" id="add snapshot local_directory column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.124674637Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=11.29559ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.131262481Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.141166812Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.903471ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.145227369Z level=info msg="Executing migration" id="add snapshot encryption_key column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.152005375Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=6.778126ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.160356464Z level=info msg="Executing migration" id="add snapshot error_string column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.170877804Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.52519ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.195442683Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.196701321Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.259138ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.200354983Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.233954361Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=33.595378ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.242713645Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.251294027Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=8.579932ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.256511851Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.274367465Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" 
duration=17.854804ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.279336056Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.288377224Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=9.040698ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.293476197Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.306290019Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=12.811332ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.315448449Z level=info msg="Executing migration" id="increase resource_uid column length" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.31548364Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=38.161µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.320227167Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.320241907Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=15.17µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.323032047Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.330451103Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.418616ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.33523407Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.346839936Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.600965ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.35418711Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.354653567Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=466.857µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.358921527Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.359268802Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=346.005µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.363860897Z level=info msg="Executing migration" id="add record column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.374396917Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.53479ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.3830443Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.390517687Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.467756ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.393939245Z 
level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.40408598Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=10.143364ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.408001345Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.415294329Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.291994ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.421545458Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.422086125Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=544.727µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.425700727Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.439054397Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=13.349799ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.449229181Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.457523869Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=8.292408ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.460784765Z level=info msg="Executing migration" id="delete orphaned service account permissions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.461004148Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=218.973µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.464476238Z level=info msg="Executing migration" id="adding action set permissions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.464852723Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=376.425µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.473245743Z level=info msg="Executing migration" id="create user_external_session table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.474516941Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.270768ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.479064246Z level=info msg="Executing migration" id="increase name_id column length to 1024" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.479088556Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=25.27µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.483389547Z level=info msg="Executing migration" id="increase session_id column length to 1024" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.483405587Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=19.62µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.487319373Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 17:06:43 
grafana | logger=migrator t=2025-06-10T17:04:05.487591267Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=273.963µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.493984568Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.501341012Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=7.355984ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.504623819Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.512383339Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.75838ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.520278211Z level=info msg="Executing migration" id="add alert_rule_state table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.521166964Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=888.843µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.527076348Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.528531609Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.454311ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.53985689Z level=info msg="Executing migration" id="add guid column to alert_rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.549956613Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.101923ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.554411887Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.565266141Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=10.852984ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.570474595Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.570521986Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.570693698Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.570776699Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=230.273µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.58417864Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.585026232Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=852.912µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.588848486Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.589701728Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, 
rule_uid and version columns" duration=853.172µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.594868012Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.595763604Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=895.242µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.599135683Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.600261189Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.125246ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.604686761Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.605798277Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.110666ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.6109468Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.618209384Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=7.261984ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.621330958Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.630969695Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.637767ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.636710417Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.646215622Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.505195ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.650727916Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.661611111Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.883055ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.665732509Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.665948392Z level=info msg="Removed 0 datasources:drilldown permissions" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.665960093Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=227.393µs 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.67000134Z level=info msg="Executing migration" id="remove title in folder unique index" 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.671078965Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.083595ms 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.67419844Z level=info msg="migrations completed" 
performed=654 skipped=0 duration=4.833693628s 17:06:43 grafana | logger=migrator t=2025-06-10T17:04:05.674827989Z level=info msg="Unlocking database" 17:06:43 grafana | logger=sqlstore t=2025-06-10T17:04:05.693273611Z level=info msg="Created default admin" user=admin 17:06:43 grafana | logger=sqlstore t=2025-06-10T17:04:05.693696037Z level=info msg="Created default organization" 17:06:43 grafana | logger=secrets t=2025-06-10T17:04:05.699978796Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 17:06:43 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-10T17:04:05.800214551Z level=info msg="Restored cache from database" duration=497.057µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.809452693Z level=info msg="Locking database" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.809469833Z level=info msg="Starting DB migrations" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.818117016Z level=info msg="Executing migration" id="create resource_migration_log table" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.818909537Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=792.381µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.823659754Z level=info msg="Executing migration" id="Initialize resource tables" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.823673295Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=13.981µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.827394558Z level=info msg="Executing migration" id="drop table resource" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.827454429Z level=info msg="Migration successfully executed" id="drop table resource" duration=60.121µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.830106376Z level=info msg="Executing migration" id="create table resource" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.831182732Z level=info msg="Migration successfully executed" id="create table resource" duration=1.075756ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.834397767Z level=info msg="Executing migration" id="create table resource, index: 0" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.835662175Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.263508ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.839788644Z level=info msg="Executing migration" id="drop table resource_history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.839868105Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=79.951µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.842719136Z level=info msg="Executing migration" id="create table resource_history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.844012584Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.292829ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.84792702Z level=info msg="Executing migration" id="create table resource_history, index: 0" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.84939773Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.46973ms 17:06:43 
grafana | logger=resource-migrator t=2025-06-10T17:04:05.853246435Z level=info msg="Executing migration" id="create table resource_history, index: 1" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.854455922Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.208587ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.857744939Z level=info msg="Executing migration" id="drop table resource_version" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.85782351Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=78.811µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.862466556Z level=info msg="Executing migration" id="create table resource_version" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.863370119Z level=info msg="Migration successfully executed" id="create table resource_version" duration=902.823µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.86627254Z level=info msg="Executing migration" id="create table resource_version, index: 0" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.867412496Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.139676ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.870312388Z level=info msg="Executing migration" id="drop table resource_blob" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.87044755Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=135.412µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.874472047Z level=info msg="Executing migration" id="create table resource_blob" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.875807966Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.335159ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.878803609Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.879993375Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.189407ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.888088911Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.889367709Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.279778ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.892534814Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:05.903253616Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.715612ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.040575732Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.055834655Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=15.261102ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.081952807Z level=info msg="Executing migration" id="Add index 
to resource_history for polling" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.083058863Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.107456ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.088033846Z level=info msg="Executing migration" id="Add index to resource for loading" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.088888358Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=854.222µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.111695182Z level=info msg="Executing migration" id="Add column folder in resource_history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.119280093Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=7.585081ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.17030408Z level=info msg="Executing migration" id="Add column folder in resource" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.184558958Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=14.247968ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.288344217Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 17:06:43 grafana | logger=deletion-marker-migrator t=2025-06-10T17:04:06.288386087Z level=info msg="finding any deletion markers" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.288949375Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=602.859µs 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.301222915Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.302465723Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.240418ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.307093841Z level=info msg="Executing migration" id="Add generation to resource history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.32069595Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=13.601929ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.325127285Z level=info msg="Executing migration" id="Add generation index to resource history" 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.326221381Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.093776ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.329970336Z level=info msg="migrations completed" performed=26 skipped=0 duration=511.897331ms 17:06:43 grafana | logger=resource-migrator t=2025-06-10T17:04:06.330764847Z level=info msg="Unlocking database" 17:06:43 grafana | t=2025-06-10T17:04:06.331384156Z level=info caller=logger.go:214 time=2025-06-10T17:04:06.331319255Z msg="Using channel notifier" logger=sql-resource-server 17:06:43 grafana | logger=plugin.store t=2025-06-10T17:04:06.344261775Z level=info msg="Loading plugins..." 
17:06:43 grafana | logger=plugins.registration t=2025-06-10T17:04:06.399096967Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 17:06:43 grafana | logger=plugins.initialization t=2025-06-10T17:04:06.399137258Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 17:06:43 grafana | logger=plugin.store t=2025-06-10T17:04:06.399344501Z level=info msg="Plugins loaded" count=53 duration=55.083736ms 17:06:43 grafana | logger=query_data t=2025-06-10T17:04:06.407796374Z level=info msg="Query Service initialization" 17:06:43 grafana | logger=live.push_http t=2025-06-10T17:04:06.413796732Z level=info msg="Live Push Gateway initialization" 17:06:43 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-10T17:04:06.428563208Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 17:06:43 grafana | logger=ngalert t=2025-06-10T17:04:06.434780899Z level=info msg="Using simple database alert instance store" 17:06:43 grafana | logger=ngalert.state.manager.persist t=2025-06-10T17:04:06.43482207Z level=info msg="Using sync state persister" 17:06:43 grafana | logger=infra.usagestats.collector t=2025-06-10T17:04:06.438346681Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:06.438892409Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 17:06:43 grafana | logger=grafanaStorageLogger t=2025-06-10T17:04:06.438794458Z level=info msg="Storage starting" 17:06:43 grafana | logger=http.server t=2025-06-10T17:04:06.445629768Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 17:06:43 grafana | logger=ngalert.state.manager t=2025-06-10T17:04:06.446005243Z level=info msg="Warming state cache for startup" 17:06:43 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-10T17:04:06.456190802Z level=info msg="Starting MultiOrg Alertmanager" 17:06:43 grafana | logger=plugins.update.checker t=2025-06-10T17:04:06.532830184Z level=info msg="Update check succeeded" duration=86.1322ms 17:06:43 grafana | logger=grafana.update.checker t=2025-06-10T17:04:06.546555295Z level=info msg="Update check succeeded" duration=106.375507ms 17:06:43 grafana | logger=sqlstore.transactions t=2025-06-10T17:04:06.558067163Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 17:06:43 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-10T17:04:06.580471711Z level=info msg="Patterns update finished" duration=140.839031ms 17:06:43 grafana | logger=provisioning.datasources t=2025-06-10T17:04:06.590034961Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 17:06:43 grafana | logger=provisioning.alerting t=2025-06-10T17:04:06.612053973Z level=info msg="starting to provision alerting" 17:06:43 grafana | logger=provisioning.alerting t=2025-06-10T17:04:06.612087053Z level=info msg="finished to provision alerting" 17:06:43 grafana | logger=provisioning.dashboard t=2025-06-10T17:04:06.613628456Z level=info msg="starting to provision dashboards" 17:06:43 grafana | logger=ngalert.state.manager t=2025-06-10T17:04:06.615754167Z level=info msg="State cache has been initialized" states=0 duration=169.749364ms 17:06:43 grafana | logger=ngalert.scheduler t=2025-06-10T17:04:06.615828608Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 
17:06:43 grafana | logger=ticker t=2025-06-10T17:04:06.615987421Z level=info msg=starting first_tick=2025-06-10T17:04:10Z 17:06:43 grafana | logger=plugin.installer t=2025-06-10T17:04:06.91186307Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 17:06:43 grafana | logger=installer.fs t=2025-06-10T17:04:07.089513555Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 17:06:43 grafana | logger=plugins.registration t=2025-06-10T17:04:07.153193363Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.153248954Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=714.308774ms 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.153280454Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 17:06:43 grafana | logger=plugin.installer t=2025-06-10T17:04:07.36708244Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 17:06:43 grafana | logger=installer.fs t=2025-06-10T17:04:07.418491619Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 17:06:43 grafana | logger=plugins.registration t=2025-06-10T17:04:07.438511431Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.438531411Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=285.245897ms 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.438551041Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.522476245Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.525718122Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.526456352Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.526866758Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.527307645Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.528187258Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.535063688Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.540490987Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=grafana-apiserver t=2025-06-10T17:04:07.541516202Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 17:06:43 grafana | logger=app-registry t=2025-06-10T17:04:07.578551252Z level=info msg="app registry initialized" 17:06:43 grafana | logger=plugin.installer t=2025-06-10T17:04:07.782589965Z level=info 
msg="Installing plugin" pluginId=grafana-exploretraces-app version= 17:06:43 grafana | logger=installer.fs t=2025-06-10T17:04:07.846864782Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 17:06:43 grafana | logger=plugins.registration t=2025-06-10T17:04:07.947827544Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.947852904Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=509.297512ms 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:07.947870934Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 17:06:43 grafana | logger=plugin.installer t=2025-06-10T17:04:08.134796159Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 17:06:43 grafana | logger=installer.fs t=2025-06-10T17:04:08.204107255Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 17:06:43 grafana | logger=plugins.registration t=2025-06-10T17:04:08.224501971Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 17:06:43 grafana | logger=plugin.backgroundinstaller t=2025-06-10T17:04:08.224527161Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=276.649947ms 17:06:43 grafana | logger=provisioning.dashboard t=2025-06-10T17:04:08.272811382Z level=info msg="finished to provision dashboards" 17:06:43 grafana | logger=infra.usagestats t=2025-06-10T17:05:40.45623574Z level=info msg="Usage stats are ready to report" 17:06:43 =================================== 17:06:43 ======== Logs from kafka ======== 17:06:43 kafka | ===> User 17:06:43 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:06:43 kafka | ===> Configuring ... 17:06:43 kafka | Running in Zookeeper mode... 17:06:43 kafka | ===> Running preflight checks ... 17:06:43 kafka | ===> Check if /var/lib/kafka/data is writable ... 17:06:43 kafka | ===> Check if Zookeeper is healthy ... 
17:06:43 kafka | [2025-06-10 17:04:04,055] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.version=17.0.14 (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka_2.13-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.9.1-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.9.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.11.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/common-utils-7.9.1.jar:/usr/share/java/cp-base-new/kafka-server-common-7.9.1-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.11.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.5.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/utility-belt-7.9.1-52.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/kafka-server-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-4.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-n
ew/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.9.1-ccs.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.9.1-ccs.jar (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,056] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,057] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,057] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,057] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,057] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,059] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,063] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:06:43 kafka | [2025-06-10 17:04:04,068] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:06:43 kafka | [2025-06-10 17:04:04,074] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:04,088] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:04,088] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:04,095] INFO Socket connection established, initiating session, client: /172.17.0.7:43654, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:04,119] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002c88c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:04,236] INFO Session: 0x1000002c88c0000 closed (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:04,236] INFO EventThread shut down for session: 0x1000002c88c0000 (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | Using log4j config /etc/kafka/log4j.properties 17:06:43 kafka | ===> Launching ... 17:06:43 kafka | ===> Launching kafka ... 17:06:43 kafka | [2025-06-10 17:04:04,847] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 17:06:43 kafka | [2025-06-10 17:04:05,053] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 17:06:43 kafka | [2025-06-10 17:04:05,133] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 17:06:43 kafka | [2025-06-10 17:04:05,135] INFO starting (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:05,135] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:05,155] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.version=17.0.14 (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kaf
ka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/
kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.memory.free=988MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,159] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,161] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@22f59fa (org.apache.zookeeper.ZooKeeper) 17:06:43 kafka | [2025-06-10 17:04:05,165] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 17:06:43 kafka | [2025-06-10 17:04:05,170] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:05,171] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 17:06:43 kafka | [2025-06-10 17:04:05,174] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:05,178] INFO Socket connection established, initiating session, client: /172.17.0.7:43656, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:05,196] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002c88c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 17:06:43 kafka | [2025-06-10 17:04:05,207] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 17:06:43 kafka | [2025-06-10 17:04:05,591] INFO Cluster ID = oo1sME1NQySYCt7KlFcuVQ (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:05,649] INFO KafkaConfig values: 17:06:43 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 17:06:43 kafka | alter.config.policy.class.name = null 17:06:43 kafka | alter.log.dirs.replication.quota.window.num = 11 17:06:43 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 17:06:43 kafka | authorizer.class.name = 17:06:43 kafka | auto.create.topics.enable = true 17:06:43 kafka | auto.include.jmx.reporter = true 17:06:43 kafka | auto.leader.rebalance.enable = true 17:06:43 kafka | background.threads = 10 17:06:43 kafka | broker.heartbeat.interval.ms = 2000 17:06:43 kafka | broker.id = 1 17:06:43 kafka | broker.id.generation.enable = true 17:06:43 kafka | broker.rack = null 17:06:43 kafka | broker.session.timeout.ms = 9000 17:06:43 kafka | client.quota.callback.class = null 17:06:43 kafka | compression.gzip.level = -1 17:06:43 kafka | compression.lz4.level = 9 17:06:43 kafka | compression.type = producer 17:06:43 kafka | compression.zstd.level = 3 17:06:43 kafka | connection.failed.authentication.delay.ms = 100 17:06:43 kafka | connections.max.idle.ms = 600000 17:06:43 kafka | connections.max.reauth.ms = 0 17:06:43 kafka | control.plane.listener.name = null 17:06:43 kafka | controlled.shutdown.enable = true 17:06:43 kafka | controlled.shutdown.max.retries = 3 17:06:43 kafka | controlled.shutdown.retry.backoff.ms = 5000 17:06:43 kafka | controller.listener.names = null 17:06:43 kafka | controller.quorum.append.linger.ms = 25 17:06:43 kafka | controller.quorum.bootstrap.servers = [] 17:06:43 kafka | controller.quorum.election.backoff.max.ms = 1000 17:06:43 kafka | controller.quorum.election.timeout.ms = 1000 17:06:43 kafka | controller.quorum.fetch.timeout.ms = 2000 17:06:43 kafka | controller.quorum.request.timeout.ms = 2000 17:06:43 kafka | controller.quorum.retry.backoff.ms = 20 17:06:43 kafka | controller.quorum.voters = [] 17:06:43 kafka | controller.quota.window.num = 11 17:06:43 kafka | controller.quota.window.size.seconds = 1 17:06:43 kafka | controller.socket.timeout.ms = 30000 17:06:43 kafka | create.topic.policy.class.name = null 17:06:43 kafka | default.replication.factor = 1 17:06:43 kafka | delegation.token.expiry.check.interval.ms = 3600000 17:06:43 kafka | delegation.token.expiry.time.ms = 86400000 17:06:43 kafka | delegation.token.master.key = null 17:06:43 kafka | delegation.token.max.lifetime.ms = 604800000 17:06:43 kafka | delegation.token.secret.key = null 17:06:43 kafka | delete.records.purgatory.purge.interval.requests = 1 17:06:43 kafka | delete.topic.enable = true 17:06:43 kafka | early.start.listeners = null 17:06:43 kafka | eligible.leader.replicas.enable = false 17:06:43 kafka | fetch.max.bytes = 57671680 17:06:43 kafka | fetch.purgatory.purge.interval.requests = 1000 17:06:43 kafka | group.consumer.assignors = 
[org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] 17:06:43 kafka | group.consumer.heartbeat.interval.ms = 5000 17:06:43 kafka | group.consumer.max.heartbeat.interval.ms = 15000 17:06:43 kafka | group.consumer.max.session.timeout.ms = 60000 17:06:43 kafka | group.consumer.max.size = 2147483647 17:06:43 kafka | group.consumer.migration.policy = disabled 17:06:43 kafka | group.consumer.min.heartbeat.interval.ms = 5000 17:06:43 kafka | group.consumer.min.session.timeout.ms = 45000 17:06:43 kafka | group.consumer.session.timeout.ms = 45000 17:06:43 kafka | group.coordinator.append.linger.ms = 10 17:06:43 kafka | group.coordinator.new.enable = false 17:06:43 kafka | group.coordinator.rebalance.protocols = [classic] 17:06:43 kafka | group.coordinator.threads = 1 17:06:43 kafka | group.initial.rebalance.delay.ms = 3000 17:06:43 kafka | group.max.session.timeout.ms = 1800000 17:06:43 kafka | group.max.size = 2147483647 17:06:43 kafka | group.min.session.timeout.ms = 6000 17:06:43 kafka | group.share.delivery.count.limit = 5 17:06:43 kafka | group.share.enable = false 17:06:43 kafka | group.share.heartbeat.interval.ms = 5000 17:06:43 kafka | group.share.max.groups = 10 17:06:43 kafka | group.share.max.heartbeat.interval.ms = 15000 17:06:43 kafka | group.share.max.record.lock.duration.ms = 60000 17:06:43 kafka | group.share.max.session.timeout.ms = 60000 17:06:43 kafka | group.share.max.size = 200 17:06:43 kafka | group.share.min.heartbeat.interval.ms = 5000 17:06:43 kafka | group.share.min.record.lock.duration.ms = 15000 17:06:43 kafka | group.share.min.session.timeout.ms = 45000 17:06:43 kafka | group.share.partition.max.record.locks = 200 17:06:43 kafka | group.share.record.lock.duration.ms = 30000 17:06:43 kafka | group.share.session.timeout.ms = 45000 17:06:43 kafka | initial.broker.registration.timeout.ms = 60000 17:06:43 kafka | inter.broker.listener.name = PLAINTEXT 17:06:43 kafka | inter.broker.protocol.version = 3.9-IV0 17:06:43 kafka | kafka.metrics.polling.interval.secs = 10 17:06:43 kafka | kafka.metrics.reporters = [] 17:06:43 kafka | leader.imbalance.check.interval.seconds = 300 17:06:43 kafka | leader.imbalance.per.broker.percentage = 10 17:06:43 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 17:06:43 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 17:06:43 kafka | log.cleaner.backoff.ms = 15000 17:06:43 kafka | log.cleaner.dedupe.buffer.size = 134217728 17:06:43 kafka | log.cleaner.delete.retention.ms = 86400000 17:06:43 kafka | log.cleaner.enable = true 17:06:43 kafka | log.cleaner.io.buffer.load.factor = 0.9 17:06:43 kafka | log.cleaner.io.buffer.size = 524288 17:06:43 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 17:06:43 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 17:06:43 kafka | log.cleaner.min.cleanable.ratio = 0.5 17:06:43 kafka | log.cleaner.min.compaction.lag.ms = 0 17:06:43 kafka | log.cleaner.threads = 1 17:06:43 kafka | log.cleanup.policy = [delete] 17:06:43 kafka | log.dir = /tmp/kafka-logs 17:06:43 kafka | log.dir.failure.timeout.ms = 30000 17:06:43 kafka | log.dirs = /var/lib/kafka/data 17:06:43 kafka | log.flush.interval.messages = 9223372036854775807 17:06:43 kafka | log.flush.interval.ms = null 17:06:43 kafka | log.flush.offset.checkpoint.interval.ms = 60000 17:06:43 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 17:06:43 kafka | 
log.flush.start.offset.checkpoint.interval.ms = 60000 17:06:43 kafka | log.index.interval.bytes = 4096 17:06:43 kafka | log.index.size.max.bytes = 10485760 17:06:43 kafka | log.initial.task.delay.ms = 30000 17:06:43 kafka | log.local.retention.bytes = -2 17:06:43 kafka | log.local.retention.ms = -2 17:06:43 kafka | log.message.downconversion.enable = true 17:06:43 kafka | log.message.format.version = 3.0-IV1 17:06:43 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 17:06:43 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 17:06:43 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 17:06:43 kafka | log.message.timestamp.type = CreateTime 17:06:43 kafka | log.preallocate = false 17:06:43 kafka | log.retention.bytes = -1 17:06:43 kafka | log.retention.check.interval.ms = 300000 17:06:43 kafka | log.retention.hours = 168 17:06:43 kafka | log.retention.minutes = null 17:06:43 kafka | log.retention.ms = null 17:06:43 kafka | log.roll.hours = 168 17:06:43 kafka | log.roll.jitter.hours = 0 17:06:43 kafka | log.roll.jitter.ms = null 17:06:43 kafka | log.roll.ms = null 17:06:43 kafka | log.segment.bytes = 1073741824 17:06:43 kafka | log.segment.delete.delay.ms = 60000 17:06:43 kafka | max.connection.creation.rate = 2147483647 17:06:43 kafka | max.connections = 2147483647 17:06:43 kafka | max.connections.per.ip = 2147483647 17:06:43 kafka | max.connections.per.ip.overrides = 17:06:43 kafka | max.incremental.fetch.session.cache.slots = 1000 17:06:43 kafka | max.request.partition.size.limit = 2000 17:06:43 kafka | message.max.bytes = 1048588 17:06:43 kafka | metadata.log.dir = null 17:06:43 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 17:06:43 kafka | metadata.log.max.snapshot.interval.ms = 3600000 17:06:43 kafka | metadata.log.segment.bytes = 1073741824 17:06:43 kafka | metadata.log.segment.min.bytes = 8388608 17:06:43 kafka | metadata.log.segment.ms = 604800000 17:06:43 kafka | metadata.max.idle.interval.ms = 500 17:06:43 kafka | metadata.max.retention.bytes = 104857600 17:06:43 kafka | metadata.max.retention.ms = 604800000 17:06:43 kafka | metric.reporters = [] 17:06:43 kafka | metrics.num.samples = 2 17:06:43 kafka | metrics.recording.level = INFO 17:06:43 kafka | metrics.sample.window.ms = 30000 17:06:43 kafka | min.insync.replicas = 1 17:06:43 kafka | node.id = 1 17:06:43 kafka | num.io.threads = 8 17:06:43 kafka | num.network.threads = 3 17:06:43 kafka | num.partitions = 1 17:06:43 kafka | num.recovery.threads.per.data.dir = 1 17:06:43 kafka | num.replica.alter.log.dirs.threads = null 17:06:43 kafka | num.replica.fetchers = 1 17:06:43 kafka | offset.metadata.max.bytes = 4096 17:06:43 kafka | offsets.commit.required.acks = -1 17:06:43 kafka | offsets.commit.timeout.ms = 5000 17:06:43 kafka | offsets.load.buffer.size = 5242880 17:06:43 kafka | offsets.retention.check.interval.ms = 600000 17:06:43 kafka | offsets.retention.minutes = 10080 17:06:43 kafka | offsets.topic.compression.codec = 0 17:06:43 kafka | offsets.topic.num.partitions = 50 17:06:43 kafka | offsets.topic.replication.factor = 1 17:06:43 kafka | offsets.topic.segment.bytes = 104857600 17:06:43 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 17:06:43 kafka | password.encoder.iterations = 4096 17:06:43 kafka | password.encoder.key.length = 128 17:06:43 kafka | password.encoder.keyfactory.algorithm = null 17:06:43 kafka | password.encoder.old.secret = null 17:06:43 kafka | password.encoder.secret = null 17:06:43 kafka | principal.builder.class = 
class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 17:06:43 kafka | process.roles = [] 17:06:43 kafka | producer.id.expiration.check.interval.ms = 600000 17:06:43 kafka | producer.id.expiration.ms = 86400000 17:06:43 kafka | producer.purgatory.purge.interval.requests = 1000 17:06:43 kafka | queued.max.request.bytes = -1 17:06:43 kafka | queued.max.requests = 500 17:06:43 kafka | quota.window.num = 11 17:06:43 kafka | quota.window.size.seconds = 1 17:06:43 kafka | remote.fetch.max.wait.ms = 500 17:06:43 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 17:06:43 kafka | remote.log.manager.copier.thread.pool.size = -1 17:06:43 kafka | remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 17:06:43 kafka | remote.log.manager.copy.quota.window.num = 11 17:06:43 kafka | remote.log.manager.copy.quota.window.size.seconds = 1 17:06:43 kafka | remote.log.manager.expiration.thread.pool.size = -1 17:06:43 kafka | remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 17:06:43 kafka | remote.log.manager.fetch.quota.window.num = 11 17:06:43 kafka | remote.log.manager.fetch.quota.window.size.seconds = 1 17:06:43 kafka | remote.log.manager.task.interval.ms = 30000 17:06:43 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 17:06:43 kafka | remote.log.manager.task.retry.backoff.ms = 500 17:06:43 kafka | remote.log.manager.task.retry.jitter = 0.2 17:06:43 kafka | remote.log.manager.thread.pool.size = 10 17:06:43 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 17:06:43 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 17:06:43 kafka | remote.log.metadata.manager.class.path = null 17:06:43 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 17:06:43 kafka | remote.log.metadata.manager.listener.name = null 17:06:43 kafka | remote.log.reader.max.pending.tasks = 100 17:06:43 kafka | remote.log.reader.threads = 10 17:06:43 kafka | remote.log.storage.manager.class.name = null 17:06:43 kafka | remote.log.storage.manager.class.path = null 17:06:43 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
17:06:43 kafka | remote.log.storage.system.enable = false 17:06:43 kafka | replica.fetch.backoff.ms = 1000 17:06:43 kafka | replica.fetch.max.bytes = 1048576 17:06:43 kafka | replica.fetch.min.bytes = 1 17:06:43 kafka | replica.fetch.response.max.bytes = 10485760 17:06:43 kafka | replica.fetch.wait.max.ms = 500 17:06:43 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 17:06:43 kafka | replica.lag.time.max.ms = 30000 17:06:43 kafka | replica.selector.class = null 17:06:43 kafka | replica.socket.receive.buffer.bytes = 65536 17:06:43 kafka | replica.socket.timeout.ms = 30000 17:06:43 kafka | replication.quota.window.num = 11 17:06:43 kafka | replication.quota.window.size.seconds = 1 17:06:43 kafka | request.timeout.ms = 30000 17:06:43 kafka | reserved.broker.max.id = 1000 17:06:43 kafka | sasl.client.callback.handler.class = null 17:06:43 kafka | sasl.enabled.mechanisms = [GSSAPI] 17:06:43 kafka | sasl.jaas.config = null 17:06:43 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 kafka | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 17:06:43 kafka | sasl.kerberos.service.name = null 17:06:43 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 kafka | sasl.login.callback.handler.class = null 17:06:43 kafka | sasl.login.class = null 17:06:43 kafka | sasl.login.connect.timeout.ms = null 17:06:43 kafka | sasl.login.read.timeout.ms = null 17:06:43 kafka | sasl.login.refresh.buffer.seconds = 300 17:06:43 kafka | sasl.login.refresh.min.period.seconds = 60 17:06:43 kafka | sasl.login.refresh.window.factor = 0.8 17:06:43 kafka | sasl.login.refresh.window.jitter = 0.05 17:06:43 kafka | sasl.login.retry.backoff.max.ms = 10000 17:06:43 kafka | sasl.login.retry.backoff.ms = 100 17:06:43 kafka | sasl.mechanism.controller.protocol = GSSAPI 17:06:43 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 17:06:43 kafka | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 kafka | sasl.oauthbearer.expected.audience = null 17:06:43 kafka | sasl.oauthbearer.expected.issuer = null 17:06:43 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 kafka | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 kafka | sasl.oauthbearer.scope.claim.name = scope 17:06:43 kafka | sasl.oauthbearer.sub.claim.name = sub 17:06:43 kafka | sasl.oauthbearer.token.endpoint.url = null 17:06:43 kafka | sasl.server.callback.handler.class = null 17:06:43 kafka | sasl.server.max.receive.size = 524288 17:06:43 kafka | security.inter.broker.protocol = PLAINTEXT 17:06:43 kafka | security.providers = null 17:06:43 kafka | server.max.startup.time.ms = 9223372036854775807 17:06:43 kafka | socket.connection.setup.timeout.max.ms = 30000 17:06:43 kafka | socket.connection.setup.timeout.ms = 10000 17:06:43 kafka | socket.listen.backlog.size = 50 17:06:43 kafka | socket.receive.buffer.bytes = 102400 17:06:43 kafka | socket.request.max.bytes = 104857600 17:06:43 kafka | socket.send.buffer.bytes = 102400 17:06:43 kafka | ssl.allow.dn.changes = false 17:06:43 kafka | ssl.allow.san.changes = false 17:06:43 kafka | ssl.cipher.suites = [] 17:06:43 kafka | ssl.client.auth = none 17:06:43 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 kafka | ssl.endpoint.identification.algorithm = https 17:06:43 kafka | ssl.engine.factory.class = 
null 17:06:43 kafka | ssl.key.password = null 17:06:43 kafka | ssl.keymanager.algorithm = SunX509 17:06:43 kafka | ssl.keystore.certificate.chain = null 17:06:43 kafka | ssl.keystore.key = null 17:06:43 kafka | ssl.keystore.location = null 17:06:43 kafka | ssl.keystore.password = null 17:06:43 kafka | ssl.keystore.type = JKS 17:06:43 kafka | ssl.principal.mapping.rules = DEFAULT 17:06:43 kafka | ssl.protocol = TLSv1.3 17:06:43 kafka | ssl.provider = null 17:06:43 kafka | ssl.secure.random.implementation = null 17:06:43 kafka | ssl.trustmanager.algorithm = PKIX 17:06:43 kafka | ssl.truststore.certificates = null 17:06:43 kafka | ssl.truststore.location = null 17:06:43 kafka | ssl.truststore.password = null 17:06:43 kafka | ssl.truststore.type = JKS 17:06:43 kafka | telemetry.max.bytes = 1048576 17:06:43 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 17:06:43 kafka | transaction.max.timeout.ms = 900000 17:06:43 kafka | transaction.partition.verification.enable = true 17:06:43 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 17:06:43 kafka | transaction.state.log.load.buffer.size = 5242880 17:06:43 kafka | transaction.state.log.min.isr = 2 17:06:43 kafka | transaction.state.log.num.partitions = 50 17:06:43 kafka | transaction.state.log.replication.factor = 3 17:06:43 kafka | transaction.state.log.segment.bytes = 104857600 17:06:43 kafka | transactional.id.expiration.ms = 604800000 17:06:43 kafka | unclean.leader.election.enable = false 17:06:43 kafka | unclean.leader.election.interval.ms = 300000 17:06:43 kafka | unstable.api.versions.enable = false 17:06:43 kafka | unstable.feature.versions.enable = false 17:06:43 kafka | zookeeper.clientCnxnSocket = null 17:06:43 kafka | zookeeper.connect = zookeeper:2181 17:06:43 kafka | zookeeper.connection.timeout.ms = null 17:06:43 kafka | zookeeper.max.in.flight.requests = 10 17:06:43 kafka | zookeeper.metadata.migration.enable = false 17:06:43 kafka | zookeeper.metadata.migration.min.batch.size = 200 17:06:43 kafka | zookeeper.session.timeout.ms = 18000 17:06:43 kafka | zookeeper.set.acl = false 17:06:43 kafka | zookeeper.ssl.cipher.suites = null 17:06:43 kafka | zookeeper.ssl.client.enable = false 17:06:43 kafka | zookeeper.ssl.crl.enable = false 17:06:43 kafka | zookeeper.ssl.enabled.protocols = null 17:06:43 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 17:06:43 kafka | zookeeper.ssl.keystore.location = null 17:06:43 kafka | zookeeper.ssl.keystore.password = null 17:06:43 kafka | zookeeper.ssl.keystore.type = null 17:06:43 kafka | zookeeper.ssl.ocsp.enable = false 17:06:43 kafka | zookeeper.ssl.protocol = TLSv1.2 17:06:43 kafka | zookeeper.ssl.truststore.location = null 17:06:43 kafka | zookeeper.ssl.truststore.password = null 17:06:43 kafka | zookeeper.ssl.truststore.type = null 17:06:43 kafka | (kafka.server.KafkaConfig) 17:06:43 kafka | [2025-06-10 17:04:05,677] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:06:43 kafka | [2025-06-10 17:04:05,677] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:06:43 kafka | [2025-06-10 17:04:05,678] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:06:43 kafka | [2025-06-10 17:04:05,681] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 17:06:43 kafka | [2025-06-10 17:04:05,687] INFO [KafkaServer 
id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:05,747] INFO Loading logs from log dirs ArrayBuffer(/var/lib/kafka/data) (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:05,751] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:05,764] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:05,765] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:05,767] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:05,788] INFO Starting the log cleaner (kafka.log.LogCleaner) 17:06:43 kafka | [2025-06-10 17:04:05,835] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 17:06:43 kafka | [2025-06-10 17:04:05,847] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 17:06:43 kafka | [2025-06-10 17:04:05,859] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 17:06:43 kafka | [2025-06-10 17:04:05,900] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) 17:06:43 kafka | [2025-06-10 17:04:06,187] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:06:43 kafka | [2025-06-10 17:04:06,202] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 17:06:43 kafka | [2025-06-10 17:04:06,202] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 17:06:43 kafka | [2025-06-10 17:04:06,206] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 17:06:43 kafka | [2025-06-10 17:04:06,209] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) 17:06:43 kafka | [2025-06-10 17:04:06,228] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,231] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,232] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,236] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,237] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,252] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 17:06:43 kafka | [2025-06-10 17:04:06,256] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 17:06:43 kafka | [2025-06-10 17:04:06,280] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 17:06:43 kafka | [2025-06-10 17:04:06,306] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749575046294,1749575046294,1,0,0,72057605992480769,258,0,27 17:06:43 kafka | (kafka.zk.KafkaZkClient) 17:06:43 kafka | [2025-06-10 17:04:06,307] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 17:06:43 kafka | [2025-06-10 17:04:06,341] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 17:06:43 kafka | [2025-06-10 17:04:06,345] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,353] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,355] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,364] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 17:06:43 kafka | [2025-06-10 17:04:06,366] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:06,370] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:06,382] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 17:06:43 kafka | [2025-06-10 17:04:06,397] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,397] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 17:06:43 kafka | [2025-06-10 17:04:06,397] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 17:06:43 kafka | [2025-06-10 17:04:06,403] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,408] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 17:06:43 kafka | [2025-06-10 17:04:06,434] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(metadataVersion=3.9-IV0, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 17:06:43 kafka | [2025-06-10 17:04:06,434] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,452] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,479] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,480] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 17:06:43 kafka | [2025-06-10 17:04:06,485] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,511] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,520] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,534] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 17:06:43 kafka | [2025-06-10 17:04:06,544] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,544] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,545] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,545] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,545] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 17:06:43 kafka | [2025-06-10 17:04:06,547] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 17:06:43 kafka | [2025-06-10 17:04:06,549] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,550] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,550] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,551] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 17:06:43 kafka | [2025-06-10 17:04:06,553] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,556] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:06,557] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 17:06:43 kafka | [2025-06-10 17:04:06,562] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 17:06:43 kafka | [2025-06-10 17:04:06,570] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,571] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,572] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 17:06:43 kafka | [2025-06-10 17:04:06,575] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 17:06:43 kafka | [2025-06-10 17:04:06,575] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,575] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,576] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,576] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,581] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) 17:06:43 kafka | [2025-06-10 17:04:06,582] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 17:06:43 kafka | [2025-06-10 17:04:06,583] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 17:06:43 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
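The WARN and the java.io.IOException just above (the stack trace continues below) come from the controller's RequestSendThread dialing the broker's own listener a moment before request processing is enabled; further down this log the same thread reports "Controller 1 connected to kafka:9092 ... for sending state change requests", so the failure is a transient startup race rather than a real outage. A minimal liveness check against the host-side listener, sketched below under the same assumptions as the earlier snippet (kafka-clients on the classpath, localhost:29092 taken from this log, timeout and naming assumed), is one way to confirm the broker is answering:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Collection;
import java.util.Properties;

public class BrokerLivenessCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // PLAINTEXT_HOST listener registered above; the timeout is an assumed value for a quick probe.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10000");

        try (AdminClient admin = AdminClient.create(props)) {
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            // A healthy single-broker setup reports exactly one node with id 1 (node.id = 1 above).
            nodes.forEach(n ->
                    System.out.println("broker " + n.id() + " at " + n.host() + ":" + n.port()));
        }
    }
}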
17:06:43 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71) 17:06:43 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:299) 17:06:43 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252) 17:06:43 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136) 17:06:43 kafka | [2025-06-10 17:04:06,583] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,585] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 17:06:43 kafka | [2025-06-10 17:04:06,588] INFO [KafkaServer id=1] Start processing authorizer futures (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:06,588] INFO [KafkaServer id=1] End processing authorizer futures (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:06,589] INFO [KafkaServer id=1] Start processing enable request processing future (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:06,589] INFO [KafkaServer id=1] End processing enable request processing future (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:06,594] INFO Kafka version: 7.9.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 17:06:43 kafka | [2025-06-10 17:04:06,594] INFO Kafka commitId: 9ee7460b50277c7131a7a2ea9587efdbd12ef30e (org.apache.kafka.common.utils.AppInfoParser) 17:06:43 kafka | [2025-06-10 17:04:06,594] INFO Kafka startTimeMs: 1749575046589 (org.apache.kafka.common.utils.AppInfoParser) 17:06:43 kafka | [2025-06-10 17:04:06,596] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 17:06:43 kafka | [2025-06-10 17:04:06,597] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,598] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,598] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,598] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,599] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,610] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:06,690] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 17:06:43 kafka | [2025-06-10 17:04:06,769] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:06,812] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) 17:06:43 kafka | [2025-06-10 17:04:06,823] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, 
from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) 17:06:43 kafka | [2025-06-10 17:04:11,612] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:11,612] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:38,849] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:06:43 kafka | [2025-06-10 17:04:38,851] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:38,852] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 17:06:43 kafka | [2025-06-10 17:04:38,858] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:38,910] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(EeyNWRw_RqeZh8XoyQ_0yg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(dJiK8vdGRrOzlCKdlIRQ6w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:38,914] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state 
from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,920] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,920] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,920] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,920] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,920] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,930] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,936] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,936] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,936] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,937] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,938] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,939] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,940] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,941] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,942] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:38,942] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,116] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,117] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,118] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,118] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,120] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,121] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,122] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,123] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,124] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,125] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,126] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,127] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,127] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,127] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,127] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,127] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,128] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,130] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,131] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,131] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,132] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,138] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,138] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 
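The TRACE entries above record the controller's replica state machine for this run: each replica of the 50 __consumer_offsets partitions and of policy-pdp-pap-0 is first moved from NonExistentReplica to NewReplica, the partitions are then brought from NewPartition to OnlinePartition, become-leader LeaderAndIsr requests are built for broker 1, and the replicas are finally moved from NewReplica to OnlineReplica (the remaining OnlineReplica transitions continue below). A minimal sketch, not part of the CSIT job itself, of a Python helper that tallies these transitions from a saved copy of the kafka container log; the file path is hypothetical, and for this run it should report 51 partitions reaching OnlinePartition and 51 replicas reaching OnlineReplica, matching the controller's own "Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions" summary above:
import re
from collections import Counter
# Match the two kinds of state.change.logger lines seen in this log.
REPLICA_RE = re.compile(r"Changed state of replica (\d+) for partition (\S+) from (\w+) to (\w+)")
PARTITION_RE = re.compile(r"Changed partition (\S+) from (\w+) to (\w+)")
def summarize(log_path):
    # Count (from_state, to_state) transitions for replicas and partitions separately.
    replica_moves = Counter()
    partition_moves = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            m = REPLICA_RE.search(line)
            if m:
                replica_moves[(m.group(3), m.group(4))] += 1
                continue
            m = PARTITION_RE.search(line)
            if m:
                partition_moves[(m.group(2), m.group(3))] += 1
    return replica_moves, partition_moves
if __name__ == "__main__":
    # "kafka.log" is a hypothetical dump of the kafka container output shown in this section.
    replicas, partitions = summarize("kafka.log")
    print("replica transitions:", dict(replicas))
    print("partition transitions:", dict(partitions))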
17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,139] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,140] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,142] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,143] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,143] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,143] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,143] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,143] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,144] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,145] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 
17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | 
[2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,146] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,147] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,147] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,147] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,147] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 17:06:43 
kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,178] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,179] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-14 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,183] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 17:06:43 kafka | [2025-06-10 17:04:39,183] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,223] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,230] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,231] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,232] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,233] 
INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,256] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,258] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,259] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,261] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,261] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,270] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,270] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,270] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,270] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,271] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,283] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,284] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,284] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,284] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,285] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,297] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,298] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,298] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,298] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,298] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,310] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,311] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,311] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,311] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,311] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,322] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,323] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,323] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,323] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,323] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,334] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,335] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,335] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,335] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,335] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,347] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,348] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,349] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,349] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,349] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,368] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,370] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,370] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,370] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,371] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,378] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,379] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,379] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,379] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,379] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,388] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,389] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,389] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,389] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,390] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,400] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,401] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,401] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,401] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,402] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,409] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,410] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,410] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,410] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,410] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,418] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,418] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,418] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,418] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,418] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,425] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,426] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,426] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,426] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,426] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,436] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,437] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,437] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,437] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,437] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,445] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,446] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,446] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,446] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,446] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,455] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,455] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,455] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,455] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,456] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,462] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,463] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,463] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,463] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,464] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,477] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,478] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,478] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,478] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,479] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,489] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,489] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,490] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,490] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,490] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,502] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,503] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,503] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,503] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,503] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,517] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,517] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,517] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,518] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,518] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,524] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,525] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,525] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,525] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,525] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,530] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,530] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,530] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,530] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,531] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,536] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,537] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,537] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,537] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,537] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,546] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,546] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,546] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,546] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,546] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,551] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,552] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,552] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,552] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,552] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,559] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,559] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,559] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,559] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,560] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,571] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,572] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,572] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,572] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,572] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,584] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,586] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,586] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,586] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,586] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,593] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,594] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,594] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,594] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,594] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,601] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,602] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,602] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,602] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,602] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,608] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,609] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,609] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,609] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,609] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(EeyNWRw_RqeZh8XoyQ_0yg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,618] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,620] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,621] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,621] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,621] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,631] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,631] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,631] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,631] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,631] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,642] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,643] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,643] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,643] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,643] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,649] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,652] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,652] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,652] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,652] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,660] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,661] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,661] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,661] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,661] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,670] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,671] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,671] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,671] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,671] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,682] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,684] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,684] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,684] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,685] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,705] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,705] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,705] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,705] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,706] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,714] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,715] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,715] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,715] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,715] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,725] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,726] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,726] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,726] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,726] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,733] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,733] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,733] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,733] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,733] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,740] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,741] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,741] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,741] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,741] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,749] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,750] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,750] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,750] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,750] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,758] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,759] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,759] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,759] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,759] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,768] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,769] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,769] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,769] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,769] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,776] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 17:06:43 kafka | [2025-06-10 17:04:39,777] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 17:06:43 kafka | [2025-06-10 17:04:39,777] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,777] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 17:06:43 kafka | [2025-06-10 17:04:39,777] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(dJiK8vdGRrOzlCKdlIRQ6w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
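The "Created log for partition __consumer_offsets-N ... with properties {cleanup.policy=compact, compression.type=producer, segment.bytes=104857600}" entries above record the broker-side settings of the internal offsets topic. A minimal sketch of reading those same settings back with the Kafka AdminClient, assuming a broker reachable at kafka:9092 (the address reported later in this log); this is an illustrative aside, not part of the CSIT run:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // kafka:9092 is the broker address that appears in this log; adjust if needed.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Map<ConfigResource, Config> configs =
                admin.describeConfigs(Collections.singleton(topic)).all().get();
            // Expect cleanup.policy=compact and segment.bytes=104857600, as in the entries above.
            configs.get(topic).entries().forEach(e ->
                System.out.println(e.name() + " = " + e.value()));
        }
    }
}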
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,783] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
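Besides the fifty __consumer_offsets partitions, the become-leader transitions above also cover policy-pdp-pap-0, the topic used for PAP/PDP messaging in this CSIT. A minimal consumer sketch for that topic, assuming the kafka:9092 address from this log and a hypothetical group id (not the group actually used by the test components):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapTail {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // broker address from this log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-tail-demo");          // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // policy-pdp-pap is the topic whose partition 0 this broker just became leader for.
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s-%d@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value());
            }
        }
    }
}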
(state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,784] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,789] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,790] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,791] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,792] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,794] INFO [Broker id=1] Finished LeaderAndIsr request in 657ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,798] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=dJiK8vdGRrOzlCKdlIRQ6w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=EeyNWRw_RqeZh8XoyQ_0yg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker 
kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,800] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,801] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,807] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,808] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,810] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,811] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,811] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,811] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,812] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,813] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 17:06:43 kafka | [2025-06-10 17:04:39,871] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,884] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,915] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ad57df67-fea3-4cf4-a72d-c4d0b25956e5 in Empty state. Created a new member id consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:39,918] INFO [GroupCoordinator 1]: Preparing to rebalance group ad57df67-fea3-4cf4-a72d-c4d0b25956e5 in state PreparingRebalance with old generation 0 (__consumer_offsets-48) (reason: Adding new member consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:40,102] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9a6ef412-4cae-425f-9a20-d948e02a7e10 in Empty state. Created a new member id consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:40,106] INFO [GroupCoordinator 1]: Preparing to rebalance group 9a6ef412-4cae-425f-9a20-d948e02a7e10 in state PreparingRebalance with old generation 0 (__consumer_offsets-16) (reason: Adding new member consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:42,892] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:42,919] INFO [GroupCoordinator 1]: Stabilized group ad57df67-fea3-4cf4-a72d-c4d0b25956e5 generation 1 (__consumer_offsets-48) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:42,919] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:42,922] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1 for group ad57df67-fea3-4cf4-a72d-c4d0b25956e5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:43,106] INFO [GroupCoordinator 1]: Stabilized group 9a6ef412-4cae-425f-9a20-d948e02a7e10 generation 1 (__consumer_offsets-16) with 1 members (kafka.coordinator.group.GroupCoordinator) 17:06:43 kafka | [2025-06-10 17:04:43,119] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83 for group 9a6ef412-4cae-425f-9a20-d948e02a7e10 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 17:06:43 =================================== 17:06:43 ======== Logs from mariadb ======== 17:06:43 mariadb | 2025-06-10 17:04:01+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:06:43 mariadb | 2025-06-10 17:04:01+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 17:06:43 mariadb | 2025-06-10 17:04:01+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 17:06:43 mariadb | 2025-06-10 17:04:02+00:00 [Note] [Entrypoint]: Initializing database files 17:06:43 mariadb | 2025-06-10 17:04:02 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:43 mariadb | 2025-06-10 17:04:02 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:43 mariadb | 2025-06-10 17:04:02 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:06:43 mariadb | 17:06:43 mariadb | 17:06:43 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
17:06:43 mariadb | To do so, start the server, then issue the following command: 17:06:43 mariadb | 17:06:43 mariadb | '/usr/bin/mysql_secure_installation' 17:06:43 mariadb | 17:06:43 mariadb | which will also give you the option of removing the test 17:06:43 mariadb | databases and anonymous user created by default. This is 17:06:43 mariadb | strongly recommended for production servers. 17:06:43 mariadb | 17:06:43 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 17:06:43 mariadb | 17:06:43 mariadb | Please report any problems at https://mariadb.org/jira 17:06:43 mariadb | 17:06:43 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 17:06:43 mariadb | 17:06:43 mariadb | Consider joining MariaDB's strong and vibrant community: 17:06:43 mariadb | https://mariadb.org/get-involved/ 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:03+00:00 [Note] [Entrypoint]: Database files initialized 17:06:43 mariadb | 2025-06-10 17:04:03+00:00 [Note] [Entrypoint]: Starting temporary server 17:06:43 mariadb | 2025-06-10 17:04:03+00:00 [Note] [Entrypoint]: Waiting for server startup 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Number of transaction pools: 1 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Completed initialization of buffer pool 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: 128 rollback segments are active. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] InnoDB: log sequence number 46590; transaction id 14 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] Plugin 'FEEDBACK' is disabled. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 17:06:43 mariadb | 2025-06-10 17:04:03 0 [Note] mariadbd: ready for connections. 17:06:43 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 17:06:43 mariadb | 2025-06-10 17:04:04+00:00 [Note] [Entrypoint]: Temporary server started. 
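[editor's note] With the temporary server up, the entrypoint next runs the init scripts under /docker-entrypoint-initdb.d (the db.sh loop shown next in this log), which create the policy databases (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp) and grant them to policy_user. As a hedged illustration only — this snippet is not part of the policy/docker CSIT; the host name, port, database and credentials are simply read off this log (mariadb:3306, policy_user/policy_user) — a client could verify the resulting policyadmin database with plain JDBC:

    // Hypothetical connectivity check against the policyadmin database created by db.sh.
    // Values mirror what this log shows; the class itself is illustrative, not CSIT code.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PolicyDbCheck {
        public static void main(String[] args) throws Exception {
            // MariaDB Connector/J URL; host/port as reported later in this log ("mariadb (172.17.0.5:3306) open")
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("policyadmin reachable, SELECT 1 returned " + rs.getInt(1));
            }
        }
    }

[end editor's note]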
17:06:43 mariadb | 2025-06-10 17:04:06+00:00 [Note] [Entrypoint]: Creating user policy_user 17:06:43 mariadb | 2025-06-10 17:04:06+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:06+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:06+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 17:06:43 mariadb | #!/bin/bash -xv 17:06:43 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 17:06:43 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 17:06:43 mariadb | # 17:06:43 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 17:06:43 mariadb | # you may not use this file except in compliance with the License. 17:06:43 mariadb | # You may obtain a copy of the License at 17:06:43 mariadb | # 17:06:43 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 17:06:43 mariadb | # 17:06:43 mariadb | # Unless required by applicable law or agreed to in writing, software 17:06:43 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 17:06:43 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17:06:43 mariadb | # See the License for the specific language governing permissions and 17:06:43 mariadb | # limitations under the License. 17:06:43 mariadb | 17:06:43 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | do 17:06:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 17:06:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 17:06:43 mariadb | done 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO 
'\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 17:06:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 17:06:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 17:06:43 mariadb | 17:06:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 17:06:43 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 17:06:43 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 17:06:43 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:07+00:00 [Note] [Entrypoint]: Stopping temporary server 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: FTS optimize thread exiting. 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Starting shutdown... 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Buffer pool(s) dump completed at 250610 17:04:07 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Shutdown completed; log sequence number 329026; transaction id 298 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] mariadbd: Shutdown complete 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:07+00:00 [Note] [Entrypoint]: Temporary server stopped 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:07+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 17:06:43 mariadb | 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Number of transaction pools: 1 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: Completed initialization of buffer pool 17:06:43 mariadb | 2025-06-10 17:04:07 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: 128 rollback segments are active. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: log sequence number 329026; transaction id 299 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] Plugin 'FEEDBACK' is disabled. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] Server socket created on IP: '0.0.0.0'. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] Server socket created on IP: '::'. 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] mariadbd: ready for connections. 17:06:43 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 17:06:43 mariadb | 2025-06-10 17:04:08 0 [Note] InnoDB: Buffer pool(s) load completed at 250610 17:04:08 17:06:43 mariadb | 2025-06-10 17:04:08 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 17:06:43 mariadb | 2025-06-10 17:04:08 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 17:06:43 mariadb | 2025-06-10 17:04:08 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 17:06:43 mariadb | 2025-06-10 17:04:08 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 17:06:43 =================================== 17:06:43 ======== Logs from apex-pdp ======== 17:06:43 policy-apex-pdp | Waiting for mariadb port 3306... 17:06:43 policy-apex-pdp | mariadb (172.17.0.5:3306) open 17:06:43 policy-apex-pdp | Waiting for kafka port 9092... 17:06:43 policy-apex-pdp | kafka (172.17.0.7:9092) open 17:06:43 policy-apex-pdp | Waiting for pap port 6969... 
17:06:43 policy-apex-pdp | pap (172.17.0.10:6969) open 17:06:43 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.182+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.358+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-apex-pdp | allow.auto.create.topics = true 17:06:43 policy-apex-pdp | auto.commit.interval.ms = 5000 17:06:43 policy-apex-pdp | auto.include.jmx.reporter = true 17:06:43 policy-apex-pdp | auto.offset.reset = latest 17:06:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:06:43 policy-apex-pdp | check.crcs = true 17:06:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:06:43 policy-apex-pdp | client.id = consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-1 17:06:43 policy-apex-pdp | client.rack = 17:06:43 policy-apex-pdp | connections.max.idle.ms = 540000 17:06:43 policy-apex-pdp | default.api.timeout.ms = 60000 17:06:43 policy-apex-pdp | enable.auto.commit = true 17:06:43 policy-apex-pdp | exclude.internal.topics = true 17:06:43 policy-apex-pdp | fetch.max.bytes = 52428800 17:06:43 policy-apex-pdp | fetch.max.wait.ms = 500 17:06:43 policy-apex-pdp | fetch.min.bytes = 1 17:06:43 policy-apex-pdp | group.id = 9a6ef412-4cae-425f-9a20-d948e02a7e10 17:06:43 policy-apex-pdp | group.instance.id = null 17:06:43 policy-apex-pdp | heartbeat.interval.ms = 3000 17:06:43 policy-apex-pdp | interceptor.classes = [] 17:06:43 policy-apex-pdp | internal.leave.group.on.close = true 17:06:43 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-apex-pdp | isolation.level = read_uncommitted 17:06:43 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:06:43 policy-apex-pdp | max.poll.interval.ms = 300000 17:06:43 policy-apex-pdp | max.poll.records = 500 17:06:43 policy-apex-pdp | metadata.max.age.ms = 300000 17:06:43 policy-apex-pdp | metric.reporters = [] 17:06:43 policy-apex-pdp | metrics.num.samples = 2 17:06:43 policy-apex-pdp | metrics.recording.level = INFO 17:06:43 policy-apex-pdp | metrics.sample.window.ms = 30000 17:06:43 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-apex-pdp | receive.buffer.bytes = 65536 17:06:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:06:43 policy-apex-pdp | reconnect.backoff.ms = 50 17:06:43 policy-apex-pdp | request.timeout.ms = 30000 17:06:43 policy-apex-pdp | retry.backoff.ms = 100 17:06:43 
policy-apex-pdp | sasl.client.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.jaas.config = null 17:06:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-apex-pdp | sasl.kerberos.service.name = null 17:06:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.login.class = null 17:06:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.mechanism = GSSAPI 17:06:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-apex-pdp | security.protocol = PLAINTEXT 17:06:43 policy-apex-pdp | security.providers = null 17:06:43 policy-apex-pdp | send.buffer.bytes = 131072 17:06:43 policy-apex-pdp | session.timeout.ms = 45000 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-apex-pdp | ssl.cipher.suites = null 17:06:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:06:43 policy-apex-pdp | ssl.engine.factory.class = null 17:06:43 policy-apex-pdp | ssl.key.password = null 17:06:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:06:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:06:43 policy-apex-pdp | ssl.keystore.key = null 17:06:43 policy-apex-pdp | ssl.keystore.location = null 17:06:43 policy-apex-pdp | ssl.keystore.password = null 17:06:43 policy-apex-pdp | ssl.keystore.type = JKS 17:06:43 policy-apex-pdp | ssl.protocol = TLSv1.3 17:06:43 policy-apex-pdp | ssl.provider = null 17:06:43 policy-apex-pdp | ssl.secure.random.implementation = null 17:06:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-apex-pdp | ssl.truststore.certificates = null 17:06:43 policy-apex-pdp | ssl.truststore.location = null 17:06:43 policy-apex-pdp | ssl.truststore.password = null 17:06:43 policy-apex-pdp | ssl.truststore.type = JKS 17:06:43 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-apex-pdp | 17:06:43 policy-apex-pdp | 
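[editor's note] The ConsumerConfig dump above shows the effective settings of the first apex-pdp consumer (bootstrap.servers=[kafka:9092], group.id=9a6ef412-4cae-425f-9a20-d948e02a7e10, auto.offset.reset=latest, PLAINTEXT, String deserializers). As a hedged sketch only — not code taken from apex-pdp — an equivalent consumer subscribing to policy-pdp-pap could be configured like this; its first poll() is what triggers the group join and rebalance recorded in the kafka log above:

    // Minimal consumer sketch matching the key settings in the ConsumerConfig dump above.
    // Illustrative only; property values are copied from this log, not from apex-pdp source.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "9a6ef412-4cae-425f-9a20-d948e02a7e10");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // First poll joins the consumer group (GroupCoordinator "Preparing to rebalance" / "Stabilized" above).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.topic() + "-" + r.partition() + "@" + r.offset() + ": " + r.value());
                }
            }
        }
    }

[end editor's note]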
[2025-06-10T17:04:39.544+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.544+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.544+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575079542 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.547+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-1, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.561+00:00|INFO|ServiceManager|main] service manager starting 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.561+00:00|INFO|ServiceManager|main] service manager starting topics 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.563+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9a6ef412-4cae-425f-9a20-d948e02a7e10, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.592+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-apex-pdp | allow.auto.create.topics = true 17:06:43 policy-apex-pdp | auto.commit.interval.ms = 5000 17:06:43 policy-apex-pdp | auto.include.jmx.reporter = true 17:06:43 policy-apex-pdp | auto.offset.reset = latest 17:06:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:06:43 policy-apex-pdp | check.crcs = true 17:06:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:06:43 policy-apex-pdp | client.id = consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2 17:06:43 policy-apex-pdp | client.rack = 17:06:43 policy-apex-pdp | connections.max.idle.ms = 540000 17:06:43 policy-apex-pdp | default.api.timeout.ms = 60000 17:06:43 policy-apex-pdp | enable.auto.commit = true 17:06:43 policy-apex-pdp | exclude.internal.topics = true 17:06:43 policy-apex-pdp | fetch.max.bytes = 52428800 17:06:43 policy-apex-pdp | fetch.max.wait.ms = 500 17:06:43 policy-apex-pdp | fetch.min.bytes = 1 17:06:43 policy-apex-pdp | group.id = 9a6ef412-4cae-425f-9a20-d948e02a7e10 17:06:43 policy-apex-pdp | group.instance.id = null 17:06:43 policy-apex-pdp | heartbeat.interval.ms = 3000 17:06:43 policy-apex-pdp | interceptor.classes = [] 17:06:43 policy-apex-pdp | internal.leave.group.on.close = true 17:06:43 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-apex-pdp | isolation.level = read_uncommitted 17:06:43 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-apex-pdp | max.partition.fetch.bytes = 1048576 17:06:43 policy-apex-pdp | max.poll.interval.ms = 300000 17:06:43 policy-apex-pdp | max.poll.records = 500 17:06:43 policy-apex-pdp | metadata.max.age.ms = 300000 17:06:43 policy-apex-pdp | metric.reporters = [] 17:06:43 policy-apex-pdp | metrics.num.samples = 2 17:06:43 policy-apex-pdp | metrics.recording.level = INFO 17:06:43 policy-apex-pdp | metrics.sample.window.ms = 30000 17:06:43 policy-apex-pdp | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-apex-pdp | receive.buffer.bytes = 65536 17:06:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:06:43 policy-apex-pdp | reconnect.backoff.ms = 50 17:06:43 policy-apex-pdp | request.timeout.ms = 30000 17:06:43 policy-apex-pdp | retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.client.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.jaas.config = null 17:06:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-apex-pdp | sasl.kerberos.service.name = null 17:06:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.login.class = null 17:06:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.mechanism = GSSAPI 17:06:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-apex-pdp | security.protocol = PLAINTEXT 17:06:43 policy-apex-pdp | security.providers = null 17:06:43 policy-apex-pdp | send.buffer.bytes = 131072 17:06:43 policy-apex-pdp | session.timeout.ms = 45000 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-apex-pdp | ssl.cipher.suites = null 17:06:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:06:43 policy-apex-pdp | ssl.engine.factory.class = null 17:06:43 policy-apex-pdp | ssl.key.password = null 17:06:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:06:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:06:43 policy-apex-pdp | ssl.keystore.key = null 17:06:43 policy-apex-pdp | ssl.keystore.location = null 17:06:43 policy-apex-pdp | ssl.keystore.password = null 17:06:43 policy-apex-pdp | ssl.keystore.type = JKS 17:06:43 policy-apex-pdp | ssl.protocol = TLSv1.3 17:06:43 policy-apex-pdp | ssl.provider = null 17:06:43 policy-apex-pdp | ssl.secure.random.implementation = null 17:06:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-apex-pdp | 
ssl.truststore.certificates = null 17:06:43 policy-apex-pdp | ssl.truststore.location = null 17:06:43 policy-apex-pdp | ssl.truststore.password = null 17:06:43 policy-apex-pdp | ssl.truststore.type = JKS 17:06:43 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-apex-pdp | 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.603+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.603+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.603+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575079603 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.603+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.608+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=78112db9-c573-4d48-b7f9-08447a5637ea, alive=false, publisher=null]]: starting 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.625+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:06:43 policy-apex-pdp | acks = -1 17:06:43 policy-apex-pdp | auto.include.jmx.reporter = true 17:06:43 policy-apex-pdp | batch.size = 16384 17:06:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 17:06:43 policy-apex-pdp | buffer.memory = 33554432 17:06:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 17:06:43 policy-apex-pdp | client.id = producer-1 17:06:43 policy-apex-pdp | compression.type = none 17:06:43 policy-apex-pdp | connections.max.idle.ms = 540000 17:06:43 policy-apex-pdp | delivery.timeout.ms = 120000 17:06:43 policy-apex-pdp | enable.idempotence = true 17:06:43 policy-apex-pdp | interceptor.classes = [] 17:06:43 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-apex-pdp | linger.ms = 0 17:06:43 policy-apex-pdp | max.block.ms = 60000 17:06:43 policy-apex-pdp | max.in.flight.requests.per.connection = 5 17:06:43 policy-apex-pdp | max.request.size = 1048576 17:06:43 policy-apex-pdp | metadata.max.age.ms = 300000 17:06:43 policy-apex-pdp | metadata.max.idle.ms = 300000 17:06:43 policy-apex-pdp | metric.reporters = [] 17:06:43 policy-apex-pdp | metrics.num.samples = 2 17:06:43 policy-apex-pdp | metrics.recording.level = INFO 17:06:43 policy-apex-pdp | metrics.sample.window.ms = 30000 17:06:43 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 17:06:43 policy-apex-pdp | partitioner.availability.timeout.ms = 0 17:06:43 policy-apex-pdp | partitioner.class = null 17:06:43 policy-apex-pdp | partitioner.ignore.keys = false 17:06:43 policy-apex-pdp | receive.buffer.bytes = 32768 17:06:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 17:06:43 policy-apex-pdp | reconnect.backoff.ms = 50 17:06:43 policy-apex-pdp | request.timeout.ms = 30000 17:06:43 policy-apex-pdp | retries = 2147483647 17:06:43 policy-apex-pdp | retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.client.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.jaas.config = null 17:06:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-apex-pdp | sasl.kerberos.service.name = null 17:06:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.callback.handler.class = null 17:06:43 policy-apex-pdp | sasl.login.class = null 17:06:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 17:06:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.mechanism = GSSAPI 17:06:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-apex-pdp | security.protocol = PLAINTEXT 17:06:43 policy-apex-pdp | security.providers = null 17:06:43 policy-apex-pdp | send.buffer.bytes = 131072 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-apex-pdp | ssl.cipher.suites = null 17:06:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 17:06:43 policy-apex-pdp | ssl.engine.factory.class = null 17:06:43 policy-apex-pdp | ssl.key.password = null 17:06:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 17:06:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 17:06:43 policy-apex-pdp | ssl.keystore.key = null 17:06:43 policy-apex-pdp | ssl.keystore.location = null 17:06:43 policy-apex-pdp | ssl.keystore.password = null 17:06:43 policy-apex-pdp | ssl.keystore.type = JKS 17:06:43 policy-apex-pdp | ssl.protocol = TLSv1.3 17:06:43 policy-apex-pdp | ssl.provider = null 17:06:43 policy-apex-pdp | ssl.secure.random.implementation = null 17:06:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-apex-pdp | ssl.truststore.certificates = null 17:06:43 policy-apex-pdp | ssl.truststore.location = null 17:06:43 policy-apex-pdp | ssl.truststore.password = null 17:06:43 policy-apex-pdp | ssl.truststore.type = JKS 17:06:43 policy-apex-pdp | transaction.timeout.ms = 60000 17:06:43 policy-apex-pdp | transactional.id = null 17:06:43 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-apex-pdp | 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.635+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
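The ProducerConfig dump above corresponds to an idempotent producer (acks = -1, enable.idempotence = true, effectively unlimited retries) on the same broker. A minimal sketch of an equivalent publisher follows, under the same assumptions as the consumer sketch earlier (kafka-clients 3.6.x, broker kafka:9092); the sample payload is a placeholder, not a real PDP message.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");               // acks = -1 in the dump above
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // matches "Instantiated an idempotent producer"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
                producer.flush();
            }
        }
    }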
17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.651+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.651+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.651+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575079651 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.652+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=78112db9-c573-4d48-b7f9-08447a5637ea, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.652+00:00|INFO|ServiceManager|main] service manager starting set alive 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.652+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.654+00:00|INFO|ServiceManager|main] service manager starting topic sinks 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.654+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.660+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.660+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.660+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.660+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9a6ef412-4cae-425f-9a20-d948e02a7e10, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.661+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9a6ef412-4cae-425f-9a20-d948e02a7e10, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.661+00:00|INFO|ServiceManager|main] service manager starting Create REST server 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.692+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 17:06:43 policy-apex-pdp | [] 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.697+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6e0f1e2a-4695-497e-9742-7a27450792f4","timestampMs":1749575079661,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.900+00:00|INFO|ServiceManager|main] service manager starting Rest Server 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.900+00:00|INFO|ServiceManager|main] service manager starting 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.901+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.901+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.913+00:00|INFO|ServiceManager|main] service manager started 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.913+00:00|INFO|ServiceManager|main] service manager started 17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.913+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
17:06:43 policy-apex-pdp | [2025-06-10T17:04:39.913+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.078+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.078+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.080+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.087+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] (Re-)joining group 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.104+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Request joining group due to: need to re-join with the given member-id: consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.104+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.104+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] (Re-)joining group 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.598+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 17:06:43 policy-apex-pdp | [2025-06-10T17:04:40.601+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83', protocol='range'} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.116+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Finished assignment for group at generation 1: {consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83=Assignment(partitions=[policy-pdp-pap-0])} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.122+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2-c8ca7174-9395-47a1-9575-f223c3ed5c83', protocol='range'} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.122+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.124+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Adding newly assigned partitions: policy-pdp-pap-0 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.130+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Found no committed offset for partition policy-pdp-pap-0 17:06:43 policy-apex-pdp | [2025-06-10T17:04:43.139+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9a6ef412-4cae-425f-9a20-d948e02a7e10-2, groupId=9a6ef412-4cae-425f-9a20-d948e02a7e10] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
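The join/sync/assignment sequence traced here (member id handed out, generation 1, partition policy-pdp-pap-0 assigned, offset reset because no committed offset exists) is standard consumer-group behaviour. An application can observe the same transitions with a ConsumerRebalanceListener; a brief sketch under the same assumptions as the consumer example above:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceLogSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");   // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        System.out.println("Assigned: " + parts);   // corresponds to "Adding newly assigned partitions"
                    }
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("Revoked: " + parts);
                    }
                });
                consumer.poll(Duration.ofSeconds(15));
            }
        }
    }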
17:06:43 policy-apex-pdp | [2025-06-10T17:04:56.182+00:00|INFO|RequestLog|qtp739264372-32] 172.17.0.2 - policyadmin [10/Jun/2025:17:04:56 +0000] "GET /metrics HTTP/1.1" 200 10641 "-" "Prometheus/3.4.1" 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.661+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e8cf33fa-defb-4478-a917-e939b5903518","timestampMs":1749575099660,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e8cf33fa-defb-4478-a917-e939b5903518","timestampMs":1749575099660,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.861+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99ab4f96-dead-4d11-9a82-912492d03ee3","timestampMs":1749575099796,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.870+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.870+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"774212dd-8554-4351-943c-72a15aedec9e","timestampMs":1749575099870,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.872+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99ab4f96-dead-4d11-9a82-912492d03ee3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"369b6db7-c168-40b8-bf74-c3e97bd3983d","timestampMs":1749575099872,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.885+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"774212dd-8554-4351-943c-72a15aedec9e","timestampMs":1749575099870,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.886+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.891+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99ab4f96-dead-4d11-9a82-912492d03ee3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"369b6db7-c168-40b8-bf74-c3e97bd3983d","timestampMs":1749575099872,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.891+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.926+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","timestampMs":1749575099797,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"84a9992b-f450-4185-947e-0a8fc223e234","timestampMs":1749575099929,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.937+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"84a9992b-f450-4185-947e-0a8fc223e234","timestampMs":1749575099929,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.937+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.975+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6dbd577d-130e-483f-86ef-661fbc249226","timestampMs":1749575099949,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.977+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6dbd577d-130e-483f-86ef-661fbc249226","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"50ccf63c-ca05-46ac-9241-090e0f0b34f9","timestampMs":1749575099977,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.987+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6dbd577d-130e-483f-86ef-661fbc249226","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"50ccf63c-ca05-46ac-9241-090e0f0b34f9","timestampMs":1749575099977,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-apex-pdp | [2025-06-10T17:04:59.987+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 17:06:43 policy-apex-pdp | [2025-06-10T17:05:56.081+00:00|INFO|RequestLog|qtp739264372-29] 172.17.0.2 - policyadmin [10/Jun/2025:17:05:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/3.4.1" 17:06:43 =================================== 17:06:43 ======== Logs from api ======== 17:06:43 policy-api | Waiting for mariadb port 3306... 17:06:43 policy-api | mariadb (172.17.0.5:3306) open 17:06:43 policy-api | Waiting for policy-db-migrator port 6824... 17:06:43 policy-api | policy-db-migrator (172.17.0.8:6824) open 17:06:43 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 17:06:43 policy-api | 17:06:43 policy-api | . 
____ _ __ _ _ 17:06:43 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:06:43 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:06:43 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:06:43 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 17:06:43 policy-api | =========|_|==============|___/=/_/_/_/ 17:06:43 policy-api | :: Spring Boot :: (v3.1.10) 17:06:43 policy-api | 17:06:43 policy-api | [2025-06-10T17:04:16.665+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:06:43 policy-api | [2025-06-10T17:04:16.722+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 17:06:43 policy-api | [2025-06-10T17:04:16.723+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 17:06:43 policy-api | [2025-06-10T17:04:18.670+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 17:06:43 policy-api | [2025-06-10T17:04:18.749+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 69 ms. Found 6 JPA repository interfaces. 17:06:43 policy-api | [2025-06-10T17:04:19.166+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 17:06:43 policy-api | [2025-06-10T17:04:19.167+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 17:06:43 policy-api | [2025-06-10T17:04:19.793+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:06:43 policy-api | [2025-06-10T17:04:19.804+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:06:43 policy-api | [2025-06-10T17:04:19.808+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:06:43 policy-api | [2025-06-10T17:04:19.808+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 17:06:43 policy-api | [2025-06-10T17:04:19.908+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 17:06:43 policy-api | [2025-06-10T17:04:19.908+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3116 ms 17:06:43 policy-api | [2025-06-10T17:04:20.326+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:06:43 policy-api | [2025-06-10T17:04:20.401+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 17:06:43 policy-api | [2025-06-10T17:04:20.454+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 17:06:43 policy-api | [2025-06-10T17:04:20.733+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 17:06:43 policy-api | [2025-06-10T17:04:20.766+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 17:06:43 policy-api | [2025-06-10T17:04:20.852+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0 17:06:43 policy-api | [2025-06-10T17:04:20.855+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
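The HikariCP lines above show the API's connection pool coming up against the MariaDB instance the container waited for. A rough equivalent using HikariCP directly is sketched below; host and port come from the log, while the database name and credentials are placeholders because the log does not reveal them:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class ApiDataSourceSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig cfg = new HikariConfig();
            // Host and port from the "Waiting for mariadb port 3306..." lines above;
            // database name and credentials are NOT in the log and are placeholders.
            cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/EXAMPLE_DB");
            cfg.setUsername("EXAMPLE_USER");
            cfg.setPassword("EXAMPLE_PASSWORD");
            cfg.setMaximumPoolSize(10);
            try (HikariDataSource ds = new HikariDataSource(cfg);   // logs "HikariPool-1 - Starting..."
                 Connection conn = ds.getConnection()) {            // pool adds a connection, as in the log
                System.out.println("connection valid: " + conn.isValid(2));
            }
        }
    }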
17:06:43 policy-api | [2025-06-10T17:04:22.921+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 17:06:43 policy-api | [2025-06-10T17:04:22.924+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:06:43 policy-api | [2025-06-10T17:04:23.980+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 17:06:43 policy-api | [2025-06-10T17:04:24.744+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 17:06:43 policy-api | [2025-06-10T17:04:25.808+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 17:06:43 policy-api | [2025-06-10T17:04:26.045+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4901ff51, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3033e54c, org.springframework.security.web.context.SecurityContextHolderFilter@1ae9cfca, org.springframework.security.web.header.HeaderWriterFilter@8b3ea30, org.springframework.security.web.authentication.logout.LogoutFilter@25b2d26a, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5b859845, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1b786da0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@28062dc2, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@e31d9c2, org.springframework.security.web.access.ExceptionTranslationFilter@231e5af, org.springframework.security.web.access.intercept.AuthorizationFilter@7908e69e] 17:06:43 policy-api | [2025-06-10T17:04:27.017+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:06:43 policy-api | [2025-06-10T17:04:27.132+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:06:43 policy-api | [2025-06-10T17:04:27.166+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 17:06:43 policy-api | [2025-06-10T17:04:27.186+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.237 seconds (process running for 11.864) 17:06:43 policy-api | [2025-06-10T17:04:39.926+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:06:43 policy-api | [2025-06-10T17:04:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' 17:06:43 policy-api | [2025-06-10T17:04:39.928+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 2 ms 17:06:43 policy-api | [2025-06-10T17:05:11.768+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 17:06:43 policy-api | [] 17:06:43 =================================== 17:06:43 ======== Logs from csit-tests ======== 17:06:43 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot 17:06:43 policy-csit | Run Robot test 17:06:43 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 17:06:43 policy-csit | -v 
NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 17:06:43 policy-csit | -v POLICY_API_IP:policy-api:6969 17:06:43 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 17:06:43 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 17:06:43 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 17:06:43 policy-csit | -v APEX_IP:policy-apex-pdp:6969 17:06:43 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 17:06:43 policy-csit | -v KAFKA_IP:kafka:9092 17:06:43 policy-csit | -v PROMETHEUS_IP:prometheus:9090 17:06:43 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 17:06:43 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 17:06:43 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 17:06:43 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 17:06:43 policy-csit | -v TEMP_FOLDER:/tmp/distribution 17:06:43 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 17:06:43 policy-csit | -v CLAMP_K8S_TEST: 17:06:43 policy-csit | Starting Robot test suites ... 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | Pap-Test & Pap-Slas 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | Pap-Test & Pap-Slas.Pap-Test 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... 
| PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 17:06:43 policy-csit | 22 tests, 22 passed, 0 failed 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... 
| PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 17:06:43 policy-csit | ------------------------------------------------------------------------------ 17:06:43 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 17:06:43 policy-csit | 8 tests, 8 passed, 0 failed 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | Pap-Test & Pap-Slas | PASS | 17:06:43 policy-csit | 30 tests, 30 passed, 0 failed 17:06:43 policy-csit | ============================================================================== 17:06:43 policy-csit | Output: /tmp/results/output.xml 17:06:43 policy-csit | Log: /tmp/results/log.html 17:06:43 policy-csit | Report: /tmp/results/report.html 17:06:43 policy-csit | RESULT: 0 17:06:43 =================================== 17:06:43 ======== Logs from policy-db-migrator ======== 17:06:43 policy-db-migrator | Waiting for mariadb port 3306... 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 17:06:43 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! 
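The migrator simply retries the TCP connection until MariaDB becomes reachable, which is why the repeated "Connection refused" lines above are harmless. The same wait-for-port loop in plain Java, with host and port from the log and an assumed two-second retry interval:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPortSketch {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress("mariadb", 3306), 2000);
                    System.out.println("Connection to mariadb 3306 succeeded");
                    return;
                } catch (IOException e) {
                    System.out.println("connect to mariadb 3306 failed: " + e.getMessage());
                    Thread.sleep(2000);   // retry interval assumed; the log does not show it
                }
            }
        }
    }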
17:06:43 policy-db-migrator | 321 blocks 17:06:43 policy-db-migrator | Preparing upgrade release version: 0800 17:06:43 policy-db-migrator | Preparing upgrade release version: 0900 17:06:43 policy-db-migrator | Preparing upgrade release version: 1000 17:06:43 policy-db-migrator | Preparing upgrade release version: 1100 17:06:43 policy-db-migrator | Preparing upgrade release version: 1200 17:06:43 policy-db-migrator | Preparing upgrade release version: 1300 17:06:43 policy-db-migrator | Done 17:06:43 policy-db-migrator | name version 17:06:43 policy-db-migrator | policyadmin 0 17:06:43 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 17:06:43 policy-db-migrator | upgrade: 0 -> 1300 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 
policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 
0230-jpatoscadatatype_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA 
VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | 
-------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0450-pdpgroup.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | 
-------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0470-pdp.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0570-toscadatatype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 
policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0630-toscanodetype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0660-toscaparameter.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0670-toscapolicies.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0690-toscapolicy.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0730-toscaproperty.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0770-toscarequirement.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0780-toscarequirements.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 
0820-toscatrigger.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 
17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT 
FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, 
topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-pdp.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 17:06:43 policy-db-migrator | JOIN pdpstatistics b 17:06:43 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 17:06:43 policy-db-migrator | SET a.id = 
b.id 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0210-sequence.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0220-sequence.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0120-toscatrigger.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 17:06:43 policy-db-migrator 
| -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0140-toscaparameter.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0150-toscaproperty.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-upgrade.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | select 'upgrade to 1100 completed' as msg 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | msg 17:06:43 policy-db-migrator | upgrade to 1100 completed 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 
policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0120-audit_sequence.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | TRUNCATE TABLE sequence 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE pdpstatistics 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | DROP TABLE statistics_sequence 17:06:43 policy-db-migrator | -------------- 17:06:43 policy-db-migrator | 17:06:43 policy-db-migrator | policyadmin: OK: upgrade (1300) 17:06:43 policy-db-migrator | name version 17:06:43 policy-db-migrator | policyadmin 1300 17:06:43 policy-db-migrator | ID script operation from_version to_version tag success atTime 17:06:43 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:08 17:06:43 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:08 17:06:43 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:08 17:06:43 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:08 17:06:43 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 7 
0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:09 17:06:43 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:10 17:06:43 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 65 
0740-toscarelationshiptype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:11 17:06:43 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 92 
1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1006251704080800u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:12 17:06:43 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1006251704080900u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1006251704081000u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1006251704081100u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1006251704081200u 1 2025-06-10 17:04:13 17:06:43 
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1006251704081200u 1 2025-06-10 17:04:13 17:06:43 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1006251704081200u 1 2025-06-10 17:04:14 17:06:43 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1006251704081200u 1 2025-06-10 17:04:14 17:06:43 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1006251704081300u 1 2025-06-10 17:04:14 17:06:43 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1006251704081300u 1 2025-06-10 17:04:14 17:06:43 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1006251704081300u 1 2025-06-10 17:04:14 17:06:43 policy-db-migrator | policyadmin: OK @ 1300 17:06:43 =================================== 17:06:43 ======== Logs from pap ======== 17:06:43 policy-pap | Waiting for mariadb port 3306... 17:06:43 policy-pap | mariadb (172.17.0.5:3306) open 17:06:43 policy-pap | Waiting for kafka port 9092... 17:06:43 policy-pap | kafka (172.17.0.7:9092) open 17:06:43 policy-pap | Waiting for api port 6969... 17:06:43 policy-pap | api (172.17.0.9:6969) open 17:06:43 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 17:06:43 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 17:06:43 policy-pap | 17:06:43 policy-pap | . ____ _ __ _ _ 17:06:43 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 17:06:43 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 17:06:43 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 17:06:43 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 17:06:43 policy-pap | =========|_|==============|___/=/_/_/_/ 17:06:43 policy-pap | :: Spring Boot :: (v3.1.10) 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:29.380+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 17:06:43 policy-pap | [2025-06-10T17:04:29.448+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 17:06:43 policy-pap | [2025-06-10T17:04:29.449+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 17:06:43 policy-pap | [2025-06-10T17:04:31.390+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 17:06:43 policy-pap | [2025-06-10T17:04:31.493+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 94 ms. Found 7 JPA repository interfaces. 17:06:43 policy-pap | [2025-06-10T17:04:31.979+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:06:43 policy-pap | [2025-06-10T17:04:31.979+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 17:06:43 policy-pap | [2025-06-10T17:04:32.568+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 17:06:43 policy-pap | [2025-06-10T17:04:32.578+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 17:06:43 policy-pap | [2025-06-10T17:04:32.580+00:00|INFO|StandardService|main] Starting service [Tomcat] 17:06:43 policy-pap | [2025-06-10T17:04:32.581+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 17:06:43 policy-pap | [2025-06-10T17:04:32.677+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 17:06:43 policy-pap | [2025-06-10T17:04:32.677+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3156 ms 17:06:43 policy-pap | [2025-06-10T17:04:33.091+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 17:06:43 policy-pap | [2025-06-10T17:04:33.145+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 17:06:43 policy-pap | [2025-06-10T17:04:33.493+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 17:06:43 policy-pap | [2025-06-10T17:04:33.587+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 17:06:43 policy-pap | [2025-06-10T17:04:33.589+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 17:06:43 policy-pap | [2025-06-10T17:04:33.618+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 17:06:43 policy-pap | [2025-06-10T17:04:35.062+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 17:06:43 policy-pap | [2025-06-10T17:04:35.076+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 17:06:43 policy-pap | [2025-06-10T17:04:35.544+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 17:06:43 policy-pap | [2025-06-10T17:04:36.000+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 17:06:43 policy-pap | [2025-06-10T17:04:36.161+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 17:06:43 policy-pap | [2025-06-10T17:04:36.500+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-pap | allow.auto.create.topics = true 17:06:43 policy-pap | auto.commit.interval.ms = 5000 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | auto.offset.reset = latest 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | check.crcs = true 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-1 17:06:43 policy-pap | client.rack = 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | default.api.timeout.ms = 60000 17:06:43 policy-pap | enable.auto.commit = true 17:06:43 policy-pap | exclude.internal.topics = true 17:06:43 policy-pap | fetch.max.bytes = 52428800 17:06:43 policy-pap | fetch.max.wait.ms = 500 17:06:43 policy-pap | fetch.min.bytes = 1 17:06:43 policy-pap | group.id = ad57df67-fea3-4cf4-a72d-c4d0b25956e5 17:06:43 policy-pap | group.instance.id = null 17:06:43 policy-pap | heartbeat.interval.ms = 3000 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | internal.leave.group.on.close = true 17:06:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-pap | isolation.level = read_uncommitted 17:06:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | max.partition.fetch.bytes = 1048576 17:06:43 policy-pap | max.poll.interval.ms = 300000 17:06:43 policy-pap | max.poll.records = 500 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-pap | receive.buffer.bytes = 65536 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 
policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | session.timeout.ms = 45000 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:36.677+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:36.677+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:36.677+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575076675 17:06:43 policy-pap | [2025-06-10T17:04:36.680+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-1, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-pap | [2025-06-10T17:04:36.681+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-pap | allow.auto.create.topics = true 17:06:43 policy-pap | auto.commit.interval.ms = 5000 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | auto.offset.reset = latest 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | check.crcs = true 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = consumer-policy-pap-2 17:06:43 policy-pap | client.rack = 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | default.api.timeout.ms = 60000 17:06:43 policy-pap | enable.auto.commit = true 17:06:43 policy-pap | exclude.internal.topics = true 17:06:43 policy-pap | fetch.max.bytes = 52428800 17:06:43 policy-pap | fetch.max.wait.ms = 500 17:06:43 policy-pap | fetch.min.bytes = 1 17:06:43 policy-pap | 
group.id = policy-pap 17:06:43 policy-pap | group.instance.id = null 17:06:43 policy-pap | heartbeat.interval.ms = 3000 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | internal.leave.group.on.close = true 17:06:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-pap | isolation.level = read_uncommitted 17:06:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | max.partition.fetch.bytes = 1048576 17:06:43 policy-pap | max.poll.interval.ms = 300000 17:06:43 policy-pap | max.poll.records = 500 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-pap | receive.buffer.bytes = 65536 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | session.timeout.ms = 45000 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 
17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:36.686+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:36.686+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:36.686+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575076686 17:06:43 policy-pap | [2025-06-10T17:04:36.687+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-pap | [2025-06-10T17:04:37.027+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 17:06:43 policy-pap | [2025-06-10T17:04:37.173+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 17:06:43 policy-pap | [2025-06-10T17:04:37.391+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6c851821, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4c0930c4, org.springframework.security.web.context.SecurityContextHolderFilter@70aa03c0, org.springframework.security.web.header.HeaderWriterFilter@5ced0537, org.springframework.security.web.authentication.logout.LogoutFilter@5e34a84b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5308e79d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2435c6ae, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@77db231c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@75c0cd39, org.springframework.security.web.access.ExceptionTranslationFilter@23d23d98, org.springframework.security.web.access.intercept.AuthorizationFilter@35744f8] 17:06:43 policy-pap | [2025-06-10T17:04:38.157+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 17:06:43 policy-pap | [2025-06-10T17:04:38.260+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 17:06:43 policy-pap | [2025-06-10T17:04:38.280+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 17:06:43 policy-pap | [2025-06-10T17:04:38.301+00:00|INFO|ServiceManager|main] Policy PAP starting 17:06:43 policy-pap | [2025-06-10T17:04:38.301+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 17:06:43 policy-pap | [2025-06-10T17:04:38.302+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 17:06:43 policy-pap | [2025-06-10T17:04:38.303+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 17:06:43 policy-pap | [2025-06-10T17:04:38.303+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 17:06:43 policy-pap | [2025-06-10T17:04:38.304+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 17:06:43 policy-pap | [2025-06-10T17:04:38.304+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 17:06:43 policy-pap | [2025-06-10T17:04:38.306+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad57df67-fea3-4cf4-a72d-c4d0b25956e5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4777f71e 17:06:43 policy-pap | [2025-06-10T17:04:38.322+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad57df67-fea3-4cf4-a72d-c4d0b25956e5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase 
[apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.323+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-pap | allow.auto.create.topics = true 17:06:43 policy-pap | auto.commit.interval.ms = 5000 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | auto.offset.reset = latest 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | check.crcs = true 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3 17:06:43 policy-pap | client.rack = 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | default.api.timeout.ms = 60000 17:06:43 policy-pap | enable.auto.commit = true 17:06:43 policy-pap | exclude.internal.topics = true 17:06:43 policy-pap | fetch.max.bytes = 52428800 17:06:43 policy-pap | fetch.max.wait.ms = 500 17:06:43 policy-pap | fetch.min.bytes = 1 17:06:43 policy-pap | group.id = ad57df67-fea3-4cf4-a72d-c4d0b25956e5 17:06:43 policy-pap | group.instance.id = null 17:06:43 policy-pap | heartbeat.interval.ms = 3000 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | internal.leave.group.on.close = true 17:06:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-pap | isolation.level = read_uncommitted 17:06:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | max.partition.fetch.bytes = 1048576 17:06:43 policy-pap | max.poll.interval.ms = 300000 17:06:43 policy-pap | max.poll.records = 500 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-pap | receive.buffer.bytes = 65536 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | 
sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | session.timeout.ms = 45000 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:38.331+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:38.332+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:38.332+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575078331 17:06:43 policy-pap | [2025-06-10T17:04:38.332+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-pap | [2025-06-10T17:04:38.336+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 17:06:43 policy-pap | [2025-06-10T17:04:38.336+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e4b28011-b013-4b98-a3fc-902c97b4067e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f877009 17:06:43 policy-pap | [2025-06-10T17:04:38.336+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e4b28011-b013-4b98-a3fc-902c97b4067e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.336+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 17:06:43 policy-pap | allow.auto.create.topics = true 17:06:43 policy-pap | auto.commit.interval.ms = 5000 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | auto.offset.reset = latest 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | check.crcs = true 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = consumer-policy-pap-4 17:06:43 policy-pap | client.rack = 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | default.api.timeout.ms = 60000 17:06:43 policy-pap | enable.auto.commit = true 17:06:43 policy-pap | exclude.internal.topics = true 17:06:43 policy-pap | fetch.max.bytes = 52428800 17:06:43 policy-pap | fetch.max.wait.ms = 500 17:06:43 policy-pap | fetch.min.bytes = 1 17:06:43 policy-pap | group.id = policy-pap 17:06:43 policy-pap | group.instance.id = null 17:06:43 policy-pap | heartbeat.interval.ms = 3000 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | internal.leave.group.on.close = true 17:06:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 17:06:43 policy-pap | isolation.level = read_uncommitted 17:06:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | max.partition.fetch.bytes = 1048576 17:06:43 policy-pap | max.poll.interval.ms = 300000 17:06:43 policy-pap | max.poll.records = 500 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 17:06:43 policy-pap | receive.buffer.bytes = 65536 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | 
sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | session.timeout.ms = 45000 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:38.342+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:38.342+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:38.342+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575078342 17:06:43 policy-pap | [2025-06-10T17:04:38.343+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 17:06:43 policy-pap | [2025-06-10T17:04:38.344+00:00|INFO|ServiceManager|main] Policy PAP starting topics 17:06:43 policy-pap | [2025-06-10T17:04:38.344+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e4b28011-b013-4b98-a3fc-902c97b4067e, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.346+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad57df67-fea3-4cf4-a72d-c4d0b25956e5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.346+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4d204999-4200-42bc-a76f-60f99fe6d91b, alive=false, publisher=null]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.370+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:06:43 policy-pap | acks = -1 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | batch.size = 16384 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | buffer.memory = 33554432 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = producer-1 17:06:43 policy-pap | compression.type = none 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | delivery.timeout.ms = 120000 17:06:43 policy-pap | enable.idempotence = true 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-pap | linger.ms = 0 17:06:43 policy-pap | max.block.ms = 60000 17:06:43 policy-pap | max.in.flight.requests.per.connection = 5 17:06:43 policy-pap | max.request.size = 1048576 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metadata.max.idle.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partitioner.adaptive.partitioning.enable = true 17:06:43 policy-pap | partitioner.availability.timeout.ms = 0 17:06:43 policy-pap | partitioner.class = null 17:06:43 policy-pap | partitioner.ignore.keys = false 17:06:43 policy-pap | receive.buffer.bytes = 32768 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retries = 2147483647 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | 
sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | transaction.timeout.ms = 60000 17:06:43 policy-pap | transactional.id = null 17:06:43 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:38.385+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
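The ProducerConfig dump above (producer-1) amounts to an idempotent, string-serializing producer pointed at kafka:9092 with acks=-1 and effectively unlimited retries. A minimal standalone sketch of an equivalent producer follows, assuming kafka-clients 3.6.x on the classpath; the class name and the sample payload are illustrative only and are not taken from the PAP source, which wraps the client in its own InlineKafkaTopicSink.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapLikeProducerSketch {                       // hypothetical class name, for illustration only
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ProducerConfig dump logged above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // logged as acks = -1 (equivalent)
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // "Instantiated an idempotent producer"
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);        // retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Sample payload only; the real PAP publishes PDP_UPDATE / PDP_STATE_CHANGE JSON on this topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}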
17:06:43 policy-pap | [2025-06-10T17:04:38.408+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:38.408+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:38.408+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575078408 17:06:43 policy-pap | [2025-06-10T17:04:38.409+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4d204999-4200-42bc-a76f-60f99fe6d91b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:06:43 policy-pap | [2025-06-10T17:04:38.409+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4d23ece3-9a8c-4812-a4b3-2dbdbcd6bd2d, alive=false, publisher=null]]: starting 17:06:43 policy-pap | [2025-06-10T17:04:38.410+00:00|INFO|ProducerConfig|main] ProducerConfig values: 17:06:43 policy-pap | acks = -1 17:06:43 policy-pap | auto.include.jmx.reporter = true 17:06:43 policy-pap | batch.size = 16384 17:06:43 policy-pap | bootstrap.servers = [kafka:9092] 17:06:43 policy-pap | buffer.memory = 33554432 17:06:43 policy-pap | client.dns.lookup = use_all_dns_ips 17:06:43 policy-pap | client.id = producer-2 17:06:43 policy-pap | compression.type = none 17:06:43 policy-pap | connections.max.idle.ms = 540000 17:06:43 policy-pap | delivery.timeout.ms = 120000 17:06:43 policy-pap | enable.idempotence = true 17:06:43 policy-pap | interceptor.classes = [] 17:06:43 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-pap | linger.ms = 0 17:06:43 policy-pap | max.block.ms = 60000 17:06:43 policy-pap | max.in.flight.requests.per.connection = 5 17:06:43 policy-pap | max.request.size = 1048576 17:06:43 policy-pap | metadata.max.age.ms = 300000 17:06:43 policy-pap | metadata.max.idle.ms = 300000 17:06:43 policy-pap | metric.reporters = [] 17:06:43 policy-pap | metrics.num.samples = 2 17:06:43 policy-pap | metrics.recording.level = INFO 17:06:43 policy-pap | metrics.sample.window.ms = 30000 17:06:43 policy-pap | partitioner.adaptive.partitioning.enable = true 17:06:43 policy-pap | partitioner.availability.timeout.ms = 0 17:06:43 policy-pap | partitioner.class = null 17:06:43 policy-pap | partitioner.ignore.keys = false 17:06:43 policy-pap | receive.buffer.bytes = 32768 17:06:43 policy-pap | reconnect.backoff.max.ms = 1000 17:06:43 policy-pap | reconnect.backoff.ms = 50 17:06:43 policy-pap | request.timeout.ms = 30000 17:06:43 policy-pap | retries = 2147483647 17:06:43 policy-pap | retry.backoff.ms = 100 17:06:43 policy-pap | sasl.client.callback.handler.class = null 17:06:43 policy-pap | sasl.jaas.config = null 17:06:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 17:06:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 17:06:43 policy-pap | sasl.kerberos.service.name = null 17:06:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 17:06:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 17:06:43 policy-pap | sasl.login.callback.handler.class = null 17:06:43 policy-pap | sasl.login.class = null 17:06:43 policy-pap | sasl.login.connect.timeout.ms = null 17:06:43 policy-pap | sasl.login.read.timeout.ms = null 17:06:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 17:06:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 17:06:43 policy-pap | sasl.login.refresh.window.factor = 0.8 17:06:43 policy-pap | 
sasl.login.refresh.window.jitter = 0.05 17:06:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.login.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.mechanism = GSSAPI 17:06:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 17:06:43 policy-pap | sasl.oauthbearer.expected.audience = null 17:06:43 policy-pap | sasl.oauthbearer.expected.issuer = null 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 17:06:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 17:06:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 17:06:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 17:06:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 17:06:43 policy-pap | security.protocol = PLAINTEXT 17:06:43 policy-pap | security.providers = null 17:06:43 policy-pap | send.buffer.bytes = 131072 17:06:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 17:06:43 policy-pap | socket.connection.setup.timeout.ms = 10000 17:06:43 policy-pap | ssl.cipher.suites = null 17:06:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 17:06:43 policy-pap | ssl.endpoint.identification.algorithm = https 17:06:43 policy-pap | ssl.engine.factory.class = null 17:06:43 policy-pap | ssl.key.password = null 17:06:43 policy-pap | ssl.keymanager.algorithm = SunX509 17:06:43 policy-pap | ssl.keystore.certificate.chain = null 17:06:43 policy-pap | ssl.keystore.key = null 17:06:43 policy-pap | ssl.keystore.location = null 17:06:43 policy-pap | ssl.keystore.password = null 17:06:43 policy-pap | ssl.keystore.type = JKS 17:06:43 policy-pap | ssl.protocol = TLSv1.3 17:06:43 policy-pap | ssl.provider = null 17:06:43 policy-pap | ssl.secure.random.implementation = null 17:06:43 policy-pap | ssl.trustmanager.algorithm = PKIX 17:06:43 policy-pap | ssl.truststore.certificates = null 17:06:43 policy-pap | ssl.truststore.location = null 17:06:43 policy-pap | ssl.truststore.password = null 17:06:43 policy-pap | ssl.truststore.type = JKS 17:06:43 policy-pap | transaction.timeout.ms = 60000 17:06:43 policy-pap | transactional.id = null 17:06:43 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 17:06:43 policy-pap | 17:06:43 policy-pap | [2025-06-10T17:04:38.411+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
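On the consumer side, the ConsumerConfig dumps earlier in the log (consumer-policy-pap-2/-4 and the consumer instances in the ad57df67-... group) describe plain string-deserializing consumers reading policy-pdp-pap from the latest offset with auto-commit enabled. A minimal standalone sketch under the same assumptions follows (kafka-clients 3.6.x on the classpath; the class name is illustrative only, since PAP itself drives the client through its SingleThreadedKafkaTopicSource wrapper).

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PapLikeConsumerSketch {                       // hypothetical class name, for illustration only
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ConsumerConfig dumps logged above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));   // topic shown in "Subscribed to topic(s)" above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());          // PAP hands these to its message dispatchers
            }
        }
    }
}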
17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749575078420 17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4d23ece3-9a8c-4812-a4b3-2dbdbcd6bd2d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 17:06:43 policy-pap | [2025-06-10T17:04:38.420+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 17:06:43 policy-pap | [2025-06-10T17:04:38.423+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 17:06:43 policy-pap | [2025-06-10T17:04:38.428+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 17:06:43 policy-pap | [2025-06-10T17:04:38.431+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 17:06:43 policy-pap | [2025-06-10T17:04:38.432+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 17:06:43 policy-pap | [2025-06-10T17:04:38.432+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 17:06:43 policy-pap | [2025-06-10T17:04:38.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 17:06:43 policy-pap | [2025-06-10T17:04:38.432+00:00|INFO|TimerManager|Thread-9] timer manager update started 17:06:43 policy-pap | [2025-06-10T17:04:38.434+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 17:06:43 policy-pap | [2025-06-10T17:04:38.435+00:00|INFO|ServiceManager|main] Policy PAP started 17:06:43 policy-pap | [2025-06-10T17:04:38.436+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.77 seconds (process running for 10.406) 17:06:43 policy-pap | [2025-06-10T17:04:38.842+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-pap | [2025-06-10T17:04:38.843+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 17:06:43 policy-pap | [2025-06-10T17:04:38.843+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-pap | [2025-06-10T17:04:38.844+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-pap | [2025-06-10T17:04:38.900+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:38.901+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Cluster ID: oo1sME1NQySYCt7KlFcuVQ 17:06:43 policy-pap | [2025-06-10T17:04:38.945+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:38.954+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 17:06:43 policy-pap | [2025-06-10T17:04:38.955+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 17:06:43 policy-pap | [2025-06-10T17:04:39.024+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.077+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.137+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.190+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.252+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.296+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.366+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.403+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.473+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.511+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.581+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] 
Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.616+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.686+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.724+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.795+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 17:06:43 policy-pap | [2025-06-10T17:04:39.841+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:06:43 policy-pap | [2025-06-10T17:04:39.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:06:43 policy-pap | [2025-06-10T17:04:39.876+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775 17:06:43 policy-pap | [2025-06-10T17:04:39.877+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 17:06:43 policy-pap | [2025-06-10T17:04:39.877+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 17:06:43 policy-pap | [2025-06-10T17:04:39.900+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 17:06:43 policy-pap | [2025-06-10T17:04:39.904+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] (Re-)joining group 17:06:43 policy-pap | [2025-06-10T17:04:39.916+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Request joining group due to: need to re-join with the given member-id: consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1 17:06:43 policy-pap | [2025-06-10T17:04:39.916+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 17:06:43 policy-pap | [2025-06-10T17:04:39.917+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] (Re-)joining group 17:06:43 policy-pap | [2025-06-10T17:04:41.618+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 17:06:43 policy-pap | [2025-06-10T17:04:41.618+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 17:06:43 policy-pap | [2025-06-10T17:04:41.621+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 17:06:43 policy-pap | [2025-06-10T17:04:42.895+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775', protocol='range'} 17:06:43 policy-pap | [2025-06-10T17:04:42.910+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775=Assignment(partitions=[policy-pdp-pap-0])} 17:06:43 policy-pap | [2025-06-10T17:04:42.920+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1', protocol='range'} 17:06:43 policy-pap | [2025-06-10T17:04:42.920+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Finished assignment for group at generation 1: 
{consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1=Assignment(partitions=[policy-pdp-pap-0])} 17:06:43 policy-pap | [2025-06-10T17:04:42.931+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3-f6bd6332-99bb-45a3-816c-1d1da56a81f1', protocol='range'} 17:06:43 policy-pap | [2025-06-10T17:04:42.932+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:06:43 policy-pap | [2025-06-10T17:04:42.932+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-c2780152-0252-4655-a5ec-93d6bd33a775', protocol='range'} 17:06:43 policy-pap | [2025-06-10T17:04:42.933+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 17:06:43 policy-pap | [2025-06-10T17:04:42.935+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Adding newly assigned partitions: policy-pdp-pap-0 17:06:43 policy-pap | [2025-06-10T17:04:42.936+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 17:06:43 policy-pap | [2025-06-10T17:04:42.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 17:06:43 policy-pap | [2025-06-10T17:04:42.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Found no committed offset for partition policy-pdp-pap-0 17:06:43 policy-pap | [2025-06-10T17:04:42.980+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad57df67-fea3-4cf4-a72d-c4d0b25956e5-3, groupId=ad57df67-fea3-4cf4-a72d-c4d0b25956e5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 17:06:43 policy-pap | [2025-06-10T17:04:42.981+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
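The sequence above — repeated LEADER_NOT_AVAILABLE / UNKNOWN_TOPIC_OR_PARTITION retries while the policy-pdp-pap topic is being created, coordinator discovery, the MemberIdRequiredException re-join, and finally the range assignment and offset reset for partition policy-pdp-pap-0 — is the normal first-join handshake of the Kafka consumer group protocol. Below is a hedged sketch of a consumer that would produce that kind of log output; the group id, topic, and broker address are taken from the log, everything else (including the offset-reset value) is illustrative:

// Hedged sketch of the consumer-group join visible above; not PAP's actual source code.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PapSourceConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");            // groupId from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // With no committed offset, the position is reset per auto.offset.reset
        // (the "Found no committed offset ... Resetting offset" lines above); "latest" is assumed here.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() triggers FindCoordinator / JoinGroup / SyncGroup on the first poll, which is
            // the Discovered-coordinator / (Re-)joining / Successfully-synced progression logged above.
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}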
17:06:43 policy-pap | [2025-06-10T17:04:59.697+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 17:06:43 policy-pap | [] 17:06:43 policy-pap | [2025-06-10T17:04:59.698+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e8cf33fa-defb-4478-a917-e939b5903518","timestampMs":1749575099660,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-pap | [2025-06-10T17:04:59.708+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e8cf33fa-defb-4478-a917-e939b5903518","timestampMs":1749575099660,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-pap | [2025-06-10T17:04:59.709+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:06:43 policy-pap | [2025-06-10T17:04:59.812+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting 17:06:43 policy-pap | [2025-06-10T17:04:59.812+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting listener 17:06:43 policy-pap | [2025-06-10T17:04:59.813+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting timer 17:06:43 policy-pap | [2025-06-10T17:04:59.813+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=99ab4f96-dead-4d11-9a82-912492d03ee3, expireMs=1749575129813] 17:06:43 policy-pap | [2025-06-10T17:04:59.815+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.815+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=99ab4f96-dead-4d11-9a82-912492d03ee3, expireMs=1749575129813] 17:06:43 policy-pap | [2025-06-10T17:04:59.816+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate started 17:06:43 policy-pap | [2025-06-10T17:04:59.820+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99ab4f96-dead-4d11-9a82-912492d03ee3","timestampMs":1749575099796,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.858+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99ab4f96-dead-4d11-9a82-912492d03ee3","timestampMs":1749575099796,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.859+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | 
{"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"99ab4f96-dead-4d11-9a82-912492d03ee3","timestampMs":1749575099796,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.860+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:06:43 policy-pap | [2025-06-10T17:04:59.860+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:06:43 policy-pap | [2025-06-10T17:04:59.882+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"774212dd-8554-4351-943c-72a15aedec9e","timestampMs":1749575099870,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-pap | [2025-06-10T17:04:59.883+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 17:06:43 policy-pap | [2025-06-10T17:04:59.883+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"774212dd-8554-4351-943c-72a15aedec9e","timestampMs":1749575099870,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup"} 17:06:43 policy-pap | [2025-06-10T17:04:59.887+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"99ab4f96-dead-4d11-9a82-912492d03ee3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"369b6db7-c168-40b8-bf74-c3e97bd3983d","timestampMs":1749575099872,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping timer 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=99ab4f96-dead-4d11-9a82-912492d03ee3, expireMs=1749575129813] 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping listener 17:06:43 policy-pap | [2025-06-10T17:04:59.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopped 17:06:43 policy-pap | [2025-06-10T17:04:59.910+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"99ab4f96-dead-4d11-9a82-912492d03ee3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"369b6db7-c168-40b8-bf74-c3e97bd3983d","timestampMs":1749575099872,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.914+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 99ab4f96-dead-4d11-9a82-912492d03ee3 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate successful 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 start publishing next request 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange starting 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange starting listener 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange starting timer 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=fbc081ab-e104-4874-a8f3-a6ccd6b5131e, expireMs=1749575129916] 17:06:43 policy-pap | [2025-06-10T17:04:59.916+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange starting enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.917+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange started 17:06:43 policy-pap | [2025-06-10T17:04:59.917+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","timestampMs":1749575099797,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.917+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=fbc081ab-e104-4874-a8f3-a6ccd6b5131e, expireMs=1749575129916] 17:06:43 policy-pap | [2025-06-10T17:04:59.926+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","timestampMs":1749575099797,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.927+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 17:06:43 policy-pap | [2025-06-10T17:04:59.939+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"84a9992b-f450-4185-947e-0a8fc223e234","timestampMs":1749575099929,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.940+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id fbc081ab-e104-4874-a8f3-a6ccd6b5131e 17:06:43 policy-pap | [2025-06-10T17:04:59.958+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","timestampMs":1749575099797,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.958+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 17:06:43 policy-pap | [2025-06-10T17:04:59.961+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"fbc081ab-e104-4874-a8f3-a6ccd6b5131e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"84a9992b-f450-4185-947e-0a8fc223e234","timestampMs":1749575099929,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange stopping 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange stopping enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange stopping timer 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=fbc081ab-e104-4874-a8f3-a6ccd6b5131e, expireMs=1749575129916] 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange stopping listener 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange stopped 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpStateChange successful 17:06:43 policy-pap | [2025-06-10T17:04:59.962+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 start publishing next request 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting listener 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting timer 17:06:43 policy-pap | 
[2025-06-10T17:04:59.963+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=6dbd577d-130e-483f-86ef-661fbc249226, expireMs=1749575129963] 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate starting enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate started 17:06:43 policy-pap | [2025-06-10T17:04:59.963+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6dbd577d-130e-483f-86ef-661fbc249226","timestampMs":1749575099949,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.976+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6dbd577d-130e-483f-86ef-661fbc249226","timestampMs":1749575099949,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.976+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"source":"pap-0af370a8-bdce-4da6-9b93-6af73062a6ed","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6dbd577d-130e-483f-86ef-661fbc249226","timestampMs":1749575099949,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.977+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 17:06:43 policy-pap | [2025-06-10T17:04:59.977+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 17:06:43 policy-pap | [2025-06-10T17:04:59.986+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 17:06:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6dbd577d-130e-483f-86ef-661fbc249226","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"50ccf63c-ca05-46ac-9241-090e0f0b34f9","timestampMs":1749575099977,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | [2025-06-10T17:04:59.986+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 17:06:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6dbd577d-130e-483f-86ef-661fbc249226","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"50ccf63c-ca05-46ac-9241-090e0f0b34f9","timestampMs":1749575099977,"name":"apex-b381139c-3990-4b87-838c-2cb3399159c9","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 17:06:43 policy-pap | 
[2025-06-10T17:04:59.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping 17:06:43 policy-pap | [2025-06-10T17:04:59.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping enqueue 17:06:43 policy-pap | [2025-06-10T17:04:59.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping timer 17:06:43 policy-pap | [2025-06-10T17:04:59.987+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6dbd577d-130e-483f-86ef-661fbc249226, expireMs=1749575129963] 17:06:43 policy-pap | [2025-06-10T17:04:59.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopping listener 17:06:43 policy-pap | [2025-06-10T17:04:59.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate stopped 17:06:43 policy-pap | [2025-06-10T17:04:59.988+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6dbd577d-130e-483f-86ef-661fbc249226 17:06:43 policy-pap | [2025-06-10T17:04:59.991+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 PdpUpdate successful 17:06:43 policy-pap | [2025-06-10T17:04:59.991+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b381139c-3990-4b87-838c-2cb3399159c9 has no more requests 17:06:43 policy-pap | [2025-06-10T17:05:29.814+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=99ab4f96-dead-4d11-9a82-912492d03ee3, expireMs=1749575129813] 17:06:43 policy-pap | [2025-06-10T17:05:29.916+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=fbc081ab-e104-4874-a8f3-a6ccd6b5131e, expireMs=1749575129916] 17:06:43 policy-pap | [2025-06-10T17:05:33.674+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 
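The exchange above is PAP's request/response cycle over the policy-pdp-pap topic: PAP publishes a PDP_UPDATE (and afterwards a PDP_STATE_CHANGE) carrying a requestId and registers a 30-second timer; the apex PDP answers with a PDP_STATUS whose responseTo echoes that requestId with responseStatus SUCCESS; PAP then cancels the timer and publishes the next queued request, and any unanswered timer is eventually discarded as expired. The following is a hedged sketch of the PDP_UPDATE payload shape only — field names and the quoted values are copied from the JSON above, but the class and the use of Gson are illustrative, not the ONAP policy-models types:

// Hedged sketch of the PDP_UPDATE payload shape seen above; a hypothetical class, not ONAP's PdpUpdate.
import com.google.gson.Gson;
import java.util.List;
import java.util.UUID;

public class PdpUpdateSketch {
    String source = "pap-0af370a8-bdce-4da6-9b93-6af73062a6ed";    // value copied from the log
    long pdpHeartbeatIntervalMs = 120000;
    List<String> policiesToBeDeployed = List.of();                 // empty in the logged update
    String messageName = "PDP_UPDATE";
    String requestId = UUID.randomUUID().toString();               // PAP echoes this back in responseTo
    long timestampMs = System.currentTimeMillis();
    String name = "apex-b381139c-3990-4b87-838c-2cb3399159c9";     // PDP instance name from the log
    String pdpGroup = "defaultGroup";
    String pdpSubgroup = "apex";

    public static void main(String[] args) {
        // Gson is used here purely for illustration (the log mentions GsonMessageBodyHandler for REST);
        // this prints a JSON object with the same field names as the PDP_UPDATE messages above.
        System.out.println(new Gson().toJson(new PdpUpdateSketch()));
    }
}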
17:06:43 policy-pap | [2025-06-10T17:05:33.721+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:43 policy-pap | [2025-06-10T17:05:33.731+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:43 policy-pap | [2025-06-10T17:05:33.732+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 17:06:43 policy-pap | [2025-06-10T17:05:34.125+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:34.611+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:34.611+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:35.130+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:35.369+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 17:06:43 policy-pap | [2025-06-10T17:05:35.488+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 17:06:43 policy-pap | [2025-06-10T17:05:35.488+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:35.489+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:35.503+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-10T17:05:35Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-10T17:05:35Z, user=policyadmin)] 17:06:43 policy-pap | [2025-06-10T17:05:36.177+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.178+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 17:06:43 policy-pap | [2025-06-10T17:05:36.179+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 17:06:43 policy-pap | [2025-06-10T17:05:36.179+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.179+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.192+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-10T17:05:36Z, user=policyadmin)] 17:06:43 policy-pap | [2025-06-10T17:05:36.528+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.528+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.528+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 17:06:43 policy-pap | [2025-06-10T17:05:36.528+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 17:06:43 policy-pap | 
[2025-06-10T17:05:36.529+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.529+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:36.537+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-10T17:05:36Z, user=policyadmin)] 17:06:43 policy-pap | [2025-06-10T17:05:37.091+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 17:06:43 policy-pap | [2025-06-10T17:05:37.094+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 17:06:43 policy-pap | [2025-06-10T17:06:38.434+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 17:06:43 =================================== 17:06:43 ======== Logs from prometheus ======== 17:06:43 prometheus | time=2025-06-10T17:03:58.645Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 17:06:43 prometheus | time=2025-06-10T17:03:58.645Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 17:06:43 prometheus | time=2025-06-10T17:03:58.646Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 17:06:43 prometheus | time=2025-06-10T17:03:58.647Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 17:06:43 prometheus | time=2025-06-10T17:03:58.649Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 17:06:43 prometheus | time=2025-06-10T17:03:58.650Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 17:06:43 prometheus | time=2025-06-10T17:03:58.652Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 17:06:43 prometheus | time=2025-06-10T17:03:58.652Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 17:06:43 prometheus | time=2025-06-10T17:03:58.659Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 17:06:43 prometheus | time=2025-06-10T17:03:58.659Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=5.24µs 17:06:43 prometheus | time=2025-06-10T17:03:58.659Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 17:06:43 prometheus | time=2025-06-10T17:03:58.660Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=612.768µs 17:06:43 prometheus | time=2025-06-10T17:03:58.660Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=265.094µs wal_replay_duration=646.379µs wbl_replay_duration=330ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=5.24µs total_replay_duration=1.051365ms 17:06:43 prometheus | time=2025-06-10T17:03:58.663Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 17:06:43 prometheus | time=2025-06-10T17:03:58.663Z level=INFO source=main.go:1290 msg="TSDB started" 17:06:43 prometheus | time=2025-06-10T17:03:58.663Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 17:06:43 prometheus | time=2025-06-10T17:03:58.665Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 17:06:43 prometheus | time=2025-06-10T17:03:58.665Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.5µs remote_storage=2.8µs web_handler=1.28µs query_engine=2.141µs scrape=378.015µs scrape_sd=135.582µs notify=194.803µs notify_sd=19.64µs rules=2.87µs tracing=5.96µs filename=/etc/prometheus/prometheus.yml totalDuration=1.543642ms 17:06:43 prometheus | time=2025-06-10T17:03:58.665Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 17:06:43 prometheus | time=2025-06-10T17:03:58.665Z level=INFO source=manager.go:175 msg="Starting rule manager..." 
component="rule manager" 17:06:43 =================================== 17:06:43 ======== Logs from simulator ======== 17:06:43 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 17:06:43 simulator | overriding logback.xml 17:06:43 simulator | 2025-06-10 17:04:04,251 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 17:06:43 simulator | 2025-06-10 17:04:04,323 INFO org.onap.policy.models.simulators starting 17:06:43 simulator | 2025-06-10 17:04:04,324 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 17:06:43 simulator | 2025-06-10 17:04:04,529 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 17:06:43 simulator | 2025-06-10 17:04:04,530 INFO org.onap.policy.models.simulators starting A&AI simulator 17:06:43 simulator | 2025-06-10 17:04:04,719 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:43 simulator | 2025-06-10 17:04:04,731 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:04,733 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:04,738 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:43 simulator | 2025-06-10 17:04:04,822 INFO Session workerName=node0 17:06:43 simulator | 2025-06-10 17:04:05,489 INFO Using GSON for REST calls 17:06:43 simulator | 2025-06-10 17:04:05,597 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 17:06:43 simulator | 2025-06-10 17:04:05,607 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 17:06:43 simulator | 2025-06-10 17:04:05,614 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1838ms 17:06:43 simulator | 2025-06-10 17:04:05,614 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4119 ms. 17:06:43 simulator | 2025-06-10 17:04:05,638 INFO org.onap.policy.models.simulators starting SDNC simulator 17:06:43 simulator | 2025-06-10 17:04:05,644 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:43 simulator | 2025-06-10 17:04:05,644 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,651 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, 
sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,654 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:43 simulator | 2025-06-10 17:04:05,665 INFO Session workerName=node0 17:06:43 simulator | 2025-06-10 17:04:05,755 INFO Using GSON for REST calls 17:06:43 simulator | 2025-06-10 17:04:05,780 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 17:06:43 simulator | 2025-06-10 17:04:05,786 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 17:06:43 simulator | 2025-06-10 17:04:05,787 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2010ms 17:06:43 simulator | 2025-06-10 17:04:05,787 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4859 ms. 
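Each simulator (A&AI on 6666 and SDNC on 6668 above, SO and VFC below) follows the same pattern: an embedded Jetty server whose root context hosts a Jersey ServletContainer, which JettyJerseyServer logs through WAITED-START, STARTING and STARTED. A hedged, generic sketch of that pattern is shown here; the class names come from the Jetty/Jersey libraries visible in this run, while the port choice and the resource package name are illustrative assumptions, not the ONAP JettyServletServer implementation:

// Hedged sketch of an embedded Jetty + Jersey servlet, the pattern behind the simulator logs above.
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

public class SimulatorServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(6668);                                  // SDNC simulator port in the log

        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");                                       // contextPath=/ as logged
        server.setHandler(context);

        // Mount a Jersey ServletContainer at /*, matching the Jerseyservlets={/*=...} entries above.
        ServletHolder jersey = context.addServlet(ServletContainer.class, "/*");
        jersey.setInitParameter("jersey.config.server.provider.packages",
                "org.example.simulator.rest");                             // hypothetical package

        server.start();                                                     // produces the "Started Server@..." lines
        server.join();
    }
}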
17:06:43 simulator | 2025-06-10 17:04:05,832 INFO org.onap.policy.models.simulators starting SO simulator 17:06:43 simulator | 2025-06-10 17:04:05,840 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:43 simulator | 2025-06-10 17:04:05,842 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,843 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,844 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:43 simulator | 2025-06-10 17:04:05,869 INFO Session workerName=node0 17:06:43 simulator | 2025-06-10 17:04:05,952 INFO Using GSON for REST calls 17:06:43 simulator | 2025-06-10 17:04:05,965 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 17:06:43 simulator | 2025-06-10 17:04:05,976 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 17:06:43 simulator | 2025-06-10 17:04:05,977 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2200ms 17:06:43 simulator | 2025-06-10 17:04:05,977 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4866 ms. 17:06:43 simulator | 2025-06-10 17:04:05,978 INFO org.onap.policy.models.simulators starting VFC simulator 17:06:43 simulator | 2025-06-10 17:04:05,981 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 17:06:43 simulator | 2025-06-10 17:04:05,981 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,983 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 17:06:43 simulator | 2025-06-10 17:04:05,984 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 17:06:43 simulator | 2025-06-10 17:04:05,987 INFO Session workerName=node0 17:06:43 simulator | 2025-06-10 17:04:06,031 INFO Using GSON for REST calls 17:06:43 simulator | 
2025-06-10 17:04:06,040 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 17:06:43 simulator | 2025-06-10 17:04:06,041 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 17:06:43 simulator | 2025-06-10 17:04:06,041 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2265ms 17:06:43 simulator | 2025-06-10 17:04:06,042 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4940 ms. 17:06:43 simulator | 2025-06-10 17:04:06,043 INFO org.onap.policy.models.simulators started 17:06:43 =================================== 17:06:43 ======== Logs from zookeeper ======== 17:06:43 zookeeper | ===> User 17:06:43 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 17:06:43 zookeeper | ===> Configuring ... 17:06:43 zookeeper | ===> Running preflight checks ... 17:06:43 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 17:06:43 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 17:06:43 zookeeper | ===> Launching ... 17:06:43 zookeeper | ===> Launching zookeeper ... 17:06:43 zookeeper | [2025-06-10 17:04:02,913] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,916] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,916] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,916] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,916] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,918] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:43 zookeeper | [2025-06-10 17:04:02,918] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:43 zookeeper | [2025-06-10 17:04:02,918] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 17:06:43 zookeeper | [2025-06-10 17:04:02,918] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 17:06:43 zookeeper | [2025-06-10 17:04:02,920] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 17:06:43 zookeeper | [2025-06-10 17:04:02,920] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,921] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,921] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,921] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,921] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 17:06:43 zookeeper | [2025-06-10 17:04:02,921] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 17:06:43 zookeeper | [2025-06-10 17:04:02,933] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 17:06:43 zookeeper | [2025-06-10 17:04:02,935] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:06:43 zookeeper | [2025-06-10 17:04:02,936] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 17:06:43 zookeeper | [2025-06-10 17:04:02,938] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:43 zookeeper | [2025-06-10 17:04:02,946] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,946] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,946] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,947] INFO (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre 
(org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,948] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/
usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219
.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,949] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,950] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,951] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 17:06:43 zookeeper | [2025-06-10 17:04:02,952] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,952] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,953] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 17:06:43 zookeeper | [2025-06-10 17:04:02,953] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 17:06:43 zookeeper | [2025-06-10 17:04:02,954] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,954] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,954] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,954] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,954] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,955] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 17:06:43 zookeeper | [2025-06-10 17:04:02,956] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,956] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,957] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 17:06:43 zookeeper | [2025-06-10 17:04:02,957] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 17:06:43 zookeeper | [2025-06-10 17:04:02,957] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:02,976] INFO Logging initialized @477ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 17:06:43 zookeeper | [2025-06-10 17:04:03,030] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 17:06:43 zookeeper | [2025-06-10 17:04:03,030] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 17:06:43 zookeeper | [2025-06-10 17:04:03,046] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) 17:06:43 zookeeper | [2025-06-10 17:04:03,075] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 17:06:43 zookeeper | [2025-06-10 17:04:03,075] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 17:06:43 zookeeper | [2025-06-10 17:04:03,076] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 17:06:43 zookeeper | [2025-06-10 17:04:03,079] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 17:06:43 zookeeper | [2025-06-10 17:04:03,087] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 17:06:43 zookeeper | [2025-06-10 17:04:03,096] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 17:06:43 zookeeper | [2025-06-10 17:04:03,096] INFO Started @601ms (org.eclipse.jetty.server.Server) 17:06:43 zookeeper | [2025-06-10 17:04:03,096] INFO Started 
AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 17:06:43 zookeeper | [2025-06-10 17:04:03,100] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,100] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,101] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,102] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,118] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,118] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 17:06:43 zookeeper | [2025-06-10 17:04:03,118] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 17:06:43 zookeeper | [2025-06-10 17:04:03,118] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 17:06:43 zookeeper | [2025-06-10 17:04:03,123] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 17:06:43 zookeeper | [2025-06-10 17:04:03,123] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:43 zookeeper | [2025-06-10 17:04:03,126] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 17:06:43 zookeeper | [2025-06-10 17:04:03,126] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 17:06:43 zookeeper | [2025-06-10 17:04:03,127] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 17:06:43 zookeeper | [2025-06-10 17:04:03,133] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 17:06:43 zookeeper | [2025-06-10 17:04:03,134] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 17:06:43 zookeeper | [2025-06-10 17:04:03,148] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 17:06:43 zookeeper | [2025-06-10 17:04:03,149] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 17:06:43 zookeeper | [2025-06-10 17:04:04,105] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 17:06:43 =================================== 17:06:43 Tearing down containers... 
17:06:43 Container policy-csit Stopping
17:06:43 Container grafana Stopping
17:06:43 Container policy-apex-pdp Stopping
17:06:43 Container policy-csit Stopped
17:06:43 Container policy-csit Removing
17:06:43 Container policy-csit Removed
17:06:44 Container grafana Stopped
17:06:44 Container grafana Removing
17:06:44 Container grafana Removed
17:06:44 Container prometheus Stopping
17:06:44 Container prometheus Stopped
17:06:44 Container prometheus Removing
17:06:44 Container prometheus Removed
17:06:54 Container policy-apex-pdp Stopped
17:06:54 Container policy-apex-pdp Removing
17:06:54 Container policy-apex-pdp Removed
17:06:54 Container policy-pap Stopping
17:06:54 Container simulator Stopping
17:07:04 Container simulator Stopped
17:07:04 Container simulator Removing
17:07:04 Container simulator Removed
17:07:04 Container policy-pap Stopped
17:07:04 Container policy-pap Removing
17:07:04 Container policy-pap Removed
17:07:04 Container policy-api Stopping
17:07:04 Container kafka Stopping
17:07:05 Container kafka Stopped
17:07:05 Container kafka Removing
17:07:05 Container kafka Removed
17:07:05 Container zookeeper Stopping
17:07:06 Container zookeeper Stopped
17:07:06 Container zookeeper Removing
17:07:06 Container zookeeper Removed
17:07:14 Container policy-api Stopped
17:07:14 Container policy-api Removing
17:07:14 Container policy-api Removed
17:07:14 Container policy-db-migrator Stopping
17:07:14 Container policy-db-migrator Stopped
17:07:14 Container policy-db-migrator Removing
17:07:14 Container policy-db-migrator Removed
17:07:14 Container mariadb Stopping
17:07:15 Container mariadb Stopped
17:07:15 Container mariadb Removing
17:07:15 Container mariadb Removed
17:07:15 Network compose_default Removing
17:07:15 Network compose_default Removed
17:07:15 $ ssh-agent -k
17:07:15 unset SSH_AUTH_SOCK;
17:07:15 unset SSH_AGENT_PID;
17:07:15 echo Agent pid 2036 killed;
17:07:15 [ssh-agent] Stopped.
17:07:15 Robot results publisher started...
17:07:15 INFO: Checking test criticality is deprecated and will be dropped in a future release!
17:07:15 -Parsing output xml:
17:07:16 Done!
17:07:16 -Copying log files to build dir:
17:07:16 Done!
17:07:16 -Assigning results to build:
17:07:16 Done!
17:07:16 -Checking thresholds:
17:07:16 Done!
17:07:16 Done publishing Robot results.
17:07:16 [PostBuildScript] - [INFO] Executing post build scripts.
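The teardown above is the Docker Compose cleanup phase of the CSIT run: every service container (policy-csit, grafana, prometheus, policy-apex-pdp, policy-pap, simulator, kafka, zookeeper, policy-api, policy-db-migrator, mariadb) is stopped and removed, and the compose_default network is deleted before the ssh-agent is killed and the Robot results are published. A minimal sketch of an equivalent cleanup, assuming the suite's compose project lives in a directory named compose (suggested by the compose_default network name); the path and flags below are illustrative, not the project's actual teardown script:

  #!/bin/bash
  # Illustrative cleanup only: stop and remove all services, anonymous volumes,
  # stray containers, and the project's default network in one step.
  cd compose || exit 1
  docker compose down -v --remove-orphans

docker compose down stops each container, removes it, and finally removes the project network, which matches the Stopping/Stopped/Removing/Removed and Network compose_default Removing/Removed sequence in the log.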
17:07:16 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16999657024836674727.sh 17:07:16 ---> sysstat.sh 17:07:16 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins9003494525586574318.sh 17:07:16 ---> package-listing.sh 17:07:16 ++ facter osfamily 17:07:16 ++ tr '[:upper:]' '[:lower:]' 17:07:17 + OS_FAMILY=debian 17:07:17 + workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap 17:07:17 + START_PACKAGES=/tmp/packages_start.txt 17:07:17 + END_PACKAGES=/tmp/packages_end.txt 17:07:17 + DIFF_PACKAGES=/tmp/packages_diff.txt 17:07:17 + PACKAGES=/tmp/packages_start.txt 17:07:17 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' 17:07:17 + PACKAGES=/tmp/packages_end.txt 17:07:17 + case "${OS_FAMILY}" in 17:07:17 + dpkg -l 17:07:17 + grep '^ii' 17:07:17 + '[' -f /tmp/packages_start.txt ']' 17:07:17 + '[' -f /tmp/packages_end.txt ']' 17:07:17 + diff /tmp/packages_start.txt /tmp/packages_end.txt 17:07:17 + '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']' 17:07:17 + mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ 17:07:17 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/ 17:07:17 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins21823023342884908.sh 17:07:17 ---> capture-instance-metadata.sh 17:07:17 Setup pyenv: 17:07:17 system 17:07:17 3.8.13 17:07:17 3.9.13 17:07:17 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-60gH from file:/tmp/.os_lf_venv 17:07:19 lf-activate-venv(): INFO: Installing: lftools 17:07:27 lf-activate-venv(): INFO: Adding /tmp/venv-60gH/bin to PATH 17:07:27 INFO: Running in OpenStack, capturing instance metadata 17:07:28 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins13697178193700364428.sh 17:07:28 provisioning config files... 17:07:28 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config17393277216410666810tmp 17:07:28 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 17:07:28 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 17:07:28 [EnvInject] - Injecting environment variables from a build step. 17:07:28 [EnvInject] - Injecting as environment variables the properties content 17:07:28 SERVER_ID=logs 17:07:28 17:07:28 [EnvInject] - Variables injected successfully. 17:07:28 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins2191991707003075106.sh 17:07:28 ---> create-netrc.sh 17:07:28 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins13644408974026702446.sh 17:07:28 ---> python-tools-install.sh 17:07:28 Setup pyenv: 17:07:28 system 17:07:28 3.8.13 17:07:28 3.9.13 17:07:28 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-60gH from file:/tmp/.os_lf_venv 17:07:30 lf-activate-venv(): INFO: Installing: lftools 17:07:38 lf-activate-venv(): INFO: Adding /tmp/venv-60gH/bin to PATH 17:07:38 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins13016731283620007863.sh 17:07:38 ---> sudo-logs.sh 17:07:38 Archiving 'sudo' log.. 
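The package-listing.sh trace above captures the node's installed-package state at the end of the job: it lists installed Debian packages with dpkg, writes the snapshot to /tmp/packages_end.txt, diffs it against the start-of-job snapshot /tmp/packages_start.txt, and copies the files into the workspace's archives/ directory. A condensed sketch of that logic, with paths taken from the trace (the variable names and structure are illustrative rather than the exact LF script):

  #!/bin/bash
  # Condensed sketch of the package-diff logic shown in the trace above.
  WORKSPACE=/w/workspace/policy-pap-newdelhi-project-csit-pap
  START=/tmp/packages_start.txt
  END=/tmp/packages_end.txt
  DIFF=/tmp/packages_diff.txt

  # Snapshot currently installed Debian packages ("ii" rows only).
  dpkg -l | grep '^ii' > "$END"

  # If a start-of-job snapshot exists, record what the build installed or changed.
  if [ -f "$START" ] && [ -f "$END" ]; then
    diff "$START" "$END" > "$DIFF" || true
  fi

  # Archive the snapshots alongside the job's other artifacts.
  mkdir -p "$WORKSPACE/archives/"
  cp -f "$DIFF" "$END" "$START" "$WORKSPACE/archives/"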
17:07:38 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins730987250893066664.sh 17:07:38 ---> job-cost.sh 17:07:38 Setup pyenv: 17:07:38 system 17:07:38 3.8.13 17:07:38 3.9.13 17:07:38 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:38 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-60gH from file:/tmp/.os_lf_venv 17:07:40 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 17:07:45 lf-activate-venv(): INFO: Adding /tmp/venv-60gH/bin to PATH 17:07:45 INFO: No Stack... 17:07:45 INFO: Retrieving Pricing Info for: v3-standard-8 17:07:45 INFO: Archiving Costs 17:07:45 [policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins2424616695379260324.sh 17:07:45 ---> logs-deploy.sh 17:07:45 Setup pyenv: 17:07:45 system 17:07:45 3.8.13 17:07:45 3.9.13 17:07:45 * 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version) 17:07:46 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-60gH from file:/tmp/.os_lf_venv 17:07:47 lf-activate-venv(): INFO: Installing: lftools 17:07:55 lf-activate-venv(): INFO: Adding /tmp/venv-60gH/bin to PATH 17:07:55 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/388 17:07:55 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 17:07:56 Archives upload complete. 17:07:57 INFO: archiving logs to Nexus 17:07:57 ---> uname -a: 17:07:57 Linux prd-ubuntu1804-docker-8c-8g-20057 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 17:07:57 17:07:57 17:07:57 ---> lscpu: 17:07:57 Architecture: x86_64 17:07:57 CPU op-mode(s): 32-bit, 64-bit 17:07:57 Byte Order: Little Endian 17:07:57 CPU(s): 8 17:07:57 On-line CPU(s) list: 0-7 17:07:57 Thread(s) per core: 1 17:07:57 Core(s) per socket: 1 17:07:57 Socket(s): 8 17:07:57 NUMA node(s): 1 17:07:57 Vendor ID: AuthenticAMD 17:07:57 CPU family: 23 17:07:57 Model: 49 17:07:57 Model name: AMD EPYC-Rome Processor 17:07:57 Stepping: 0 17:07:57 CPU MHz: 2800.000 17:07:57 BogoMIPS: 5600.00 17:07:57 Virtualization: AMD-V 17:07:57 Hypervisor vendor: KVM 17:07:57 Virtualization type: full 17:07:57 L1d cache: 32K 17:07:57 L1i cache: 32K 17:07:57 L2 cache: 512K 17:07:57 L3 cache: 16384K 17:07:57 NUMA node0 CPU(s): 0-7 17:07:57 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 17:07:57 17:07:57 17:07:57 ---> nproc: 17:07:57 8 17:07:57 17:07:57 17:07:57 ---> df -h: 17:07:57 Filesystem Size Used Avail Use% Mounted on 17:07:57 udev 16G 0 16G 0% /dev 17:07:57 tmpfs 3.2G 708K 3.2G 1% /run 17:07:57 /dev/vda1 155G 15G 141G 10% / 17:07:57 tmpfs 16G 0 16G 0% /dev/shm 17:07:57 tmpfs 5.0M 0 5.0M 0% /run/lock 17:07:57 tmpfs 16G 0 16G 0% /sys/fs/cgroup 17:07:57 /dev/vda15 105M 4.4M 100M 5% /boot/efi 17:07:57 tmpfs 3.2G 0 3.2G 0% /run/user/1001 17:07:57 17:07:57 17:07:57 ---> free -m: 17:07:57 total used free shared buff/cache available 
17:07:57 Mem: 32167 862 24571 0 6733 30849 17:07:57 Swap: 1023 0 1023 17:07:57 17:07:57 17:07:57 ---> ip addr: 17:07:57 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 17:07:57 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 17:07:57 inet 127.0.0.1/8 scope host lo 17:07:57 valid_lft forever preferred_lft forever 17:07:57 inet6 ::1/128 scope host 17:07:57 valid_lft forever preferred_lft forever 17:07:57 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 17:07:57 link/ether fa:16:3e:ef:61:3f brd ff:ff:ff:ff:ff:ff 17:07:57 inet 10.30.107.28/23 brd 10.30.107.255 scope global dynamic ens3 17:07:57 valid_lft 85990sec preferred_lft 85990sec 17:07:57 inet6 fe80::f816:3eff:feef:613f/64 scope link 17:07:57 valid_lft forever preferred_lft forever 17:07:57 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 17:07:57 link/ether 02:42:17:b1:db:5b brd ff:ff:ff:ff:ff:ff 17:07:57 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 17:07:57 valid_lft forever preferred_lft forever 17:07:57 inet6 fe80::42:17ff:feb1:db5b/64 scope link 17:07:57 valid_lft forever preferred_lft forever 17:07:57 17:07:57 17:07:57 ---> sar -b -r -n DEV: 17:07:57 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20057) 06/10/25 _x86_64_ (8 CPU) 17:07:57 17:07:57 17:01:11 LINUX RESTART (8 CPU) 17:07:57 17:07:57 17:02:01 tps rtps wtps bread/s bwrtn/s 17:07:57 17:03:01 364.39 30.99 333.39 1040.76 81231.26 17:07:57 17:04:01 398.15 23.78 374.37 2718.61 202382.80 17:07:57 17:05:01 349.73 9.23 340.49 379.47 22457.94 17:07:57 17:06:01 52.13 0.28 51.85 23.73 6172.93 17:07:57 17:07:01 17.41 0.05 17.36 8.80 355.27 17:07:57 Average: 236.36 12.87 223.49 834.25 62518.16 17:07:57 17:07:57 17:02:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 17:07:57 17:03:01 30028980 31635872 2910240 8.84 72848 1843028 1499772 4.41 924444 1692508 190580 17:07:57 17:04:01 25363704 31426504 7575516 23.00 132408 6051344 4534488 13.34 1223948 5792624 1808 17:07:57 17:05:01 22807612 29489044 10131608 30.76 170408 6576404 9224688 27.14 3411860 6067296 63624 17:07:57 17:06:01 22956040 29563964 9983180 30.31 176524 6499520 9106944 26.79 3337156 5995048 360 17:07:57 17:07:01 23361924 29924648 9577296 29.08 176888 6459752 7391200 21.75 2996740 5947808 436 17:07:57 Average: 24903652 30408006 8035568 24.40 145815 5486010 6351418 18.69 2378830 5099057 51362 17:07:57 17:07:57 17:02:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 17:07:57 17:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:03:01 ens3 576.55 358.21 1728.63 83.94 0.00 0.00 0.00 0.00 17:07:57 17:03:01 lo 1.33 1.33 0.16 0.16 0.00 0.00 0.00 0.00 17:07:57 17:04:01 veth055b20d 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:04:01 veth193cd96 0.00 0.18 0.00 0.02 0.00 0.00 0.00 0.00 17:07:57 17:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:04:01 veth759d3a4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:05:01 veth193cd96 0.45 0.67 0.05 1.08 0.00 0.00 0.00 0.00 17:07:57 17:05:01 docker0 35.48 52.76 3.25 300.41 0.00 0.00 0.00 0.00 17:07:57 17:05:01 ens3 2699.68 1549.48 37876.72 183.84 0.00 0.00 0.00 0.00 17:07:57 17:05:01 vethd501c83 0.00 0.37 0.00 0.02 0.00 0.00 0.00 0.00 17:07:57 17:06:01 veth193cd96 0.58 0.63 0.06 1.52 0.00 0.00 0.00 0.00 17:07:57 17:06:01 docker0 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:06:01 ens3 7.03 6.18 1.81 1.87 0.00 0.00 0.00 0.00 17:07:57 17:06:01 vethd501c83 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 
17:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17:07:57 17:07:01 ens3 15.33 14.48 10.20 15.09 0.00 0.00 0.00 0.00 17:07:57 17:07:01 vethd501c83 0.00 0.20 0.00 0.01 0.00 0.00 0.00 0.00 17:07:57 17:07:01 vethc65da91 5.23 7.45 0.84 0.99 0.00 0.00 0.00 0.00 17:07:57 Average: docker0 7.10 10.56 0.65 60.08 0.00 0.00 0.00 0.00 17:07:57 Average: ens3 530.39 304.31 7539.67 38.72 0.00 0.00 0.00 0.00 17:07:57 Average: vethd501c83 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00 17:07:57 Average: vethc65da91 1.05 1.49 0.17 0.20 0.00 0.00 0.00 0.00 17:07:57 17:07:57 17:07:57 ---> sar -P ALL: 17:07:57 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20057) 06/10/25 _x86_64_ (8 CPU) 17:07:57 17:07:57 17:01:11 LINUX RESTART (8 CPU) 17:07:57 17:07:57 17:02:01 CPU %user %nice %system %iowait %steal %idle 17:07:57 17:03:01 all 11.64 0.00 1.18 2.82 0.04 84.32 17:07:57 17:03:01 0 14.30 0.00 1.10 0.97 0.03 83.61 17:07:57 17:03:01 1 39.30 0.00 3.69 2.22 0.08 54.71 17:07:57 17:03:01 2 3.91 0.00 1.28 10.39 0.05 84.38 17:07:57 17:03:01 3 9.61 0.00 1.05 0.75 0.02 88.57 17:07:57 17:03:01 4 4.73 0.00 0.30 7.81 0.07 87.08 17:07:57 17:03:01 5 5.29 0.00 1.03 0.20 0.02 93.46 17:07:57 17:03:01 6 5.63 0.00 0.38 0.02 0.02 93.96 17:07:57 17:03:01 7 10.36 0.00 0.57 0.20 0.03 88.83 17:07:57 17:04:01 all 16.68 0.00 8.78 10.30 0.10 64.13 17:07:57 17:04:01 0 16.34 0.00 8.26 8.02 0.12 67.27 17:07:57 17:04:01 1 18.88 0.00 9.87 39.19 0.14 31.93 17:07:57 17:04:01 2 16.65 0.00 7.86 4.04 0.08 71.37 17:07:57 17:04:01 3 16.17 0.00 8.42 4.37 0.09 70.96 17:07:57 17:04:01 4 17.69 0.00 8.67 11.17 0.07 62.40 17:07:57 17:04:01 5 15.71 0.00 10.76 3.75 0.12 69.66 17:07:57 17:04:01 6 17.01 0.00 7.93 7.79 0.08 67.19 17:07:57 17:04:01 7 15.07 0.00 8.50 4.29 0.12 72.02 17:07:57 17:05:01 all 29.24 0.00 4.20 1.65 0.11 64.81 17:07:57 17:05:01 0 24.67 0.00 4.24 2.47 0.08 68.53 17:07:57 17:05:01 1 31.94 0.00 3.71 1.56 0.12 62.67 17:07:57 17:05:01 2 25.32 0.00 3.86 2.16 0.10 68.56 17:07:57 17:05:01 3 34.14 0.00 4.58 2.54 0.12 58.61 17:07:57 17:05:01 4 31.86 0.00 4.95 1.43 0.12 61.64 17:07:57 17:05:01 5 27.21 0.00 4.51 1.40 0.10 66.78 17:07:57 17:05:01 6 33.52 0.00 4.25 0.71 0.12 61.40 17:07:57 17:05:01 7 25.25 0.00 3.50 0.90 0.10 70.25 17:07:57 17:06:01 all 6.55 0.00 1.10 0.18 0.05 92.12 17:07:57 17:06:01 0 8.40 0.00 1.22 0.33 0.05 90.00 17:07:57 17:06:01 1 5.54 0.00 1.07 0.59 0.03 92.77 17:07:57 17:06:01 2 6.97 0.00 1.07 0.05 0.05 91.85 17:07:57 17:06:01 3 3.83 0.00 1.19 0.13 0.08 94.77 17:07:57 17:06:01 4 9.11 0.00 1.05 0.18 0.07 89.59 17:07:57 17:06:01 5 6.91 0.00 1.23 0.07 0.07 91.72 17:07:57 17:06:01 6 4.33 0.00 0.84 0.02 0.07 94.75 17:07:57 17:06:01 7 7.28 0.00 1.12 0.05 0.03 91.51 17:07:57 17:07:01 all 1.78 0.00 0.49 0.06 0.05 97.61 17:07:57 17:07:01 0 1.97 0.00 0.55 0.03 0.07 97.38 17:07:57 17:07:01 1 1.65 0.00 0.50 0.18 0.05 97.61 17:07:57 17:07:01 2 1.84 0.00 0.42 0.02 0.05 97.67 17:07:57 17:07:01 3 1.12 0.00 0.48 0.03 0.05 98.31 17:07:57 17:07:01 4 1.90 0.00 0.48 0.10 0.05 97.46 17:07:57 17:07:01 5 1.70 0.00 0.48 0.05 0.05 97.71 17:07:57 17:07:01 6 1.94 0.00 0.58 0.03 0.05 97.39 17:07:57 17:07:01 7 2.10 0.00 0.50 0.03 0.05 97.31 17:07:57 Average: all 13.14 0.00 3.13 2.98 0.07 80.67 17:07:57 Average: 0 13.12 0.00 3.06 2.35 0.07 81.41 17:07:57 Average: 1 19.46 0.00 3.74 8.62 0.08 68.09 17:07:57 Average: 2 10.89 0.00 2.88 3.33 0.07 82.84 17:07:57 Average: 3 12.93 0.00 3.12 1.56 0.07 82.32 17:07:57 Average: 4 13.02 0.00 3.07 4.12 0.07 79.72 17:07:57 Average: 5 11.33 0.00 3.58 1.08 0.07 83.94 17:07:57 Average: 6 12.44 0.00 2.78 1.69 0.07 83.02 
17:07:57 Average: 7 12.00 0.00 2.82 1.09 0.07 84.03 17:07:57 17:07:57 17:07:57
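The sar tables above are the resource-usage record that sysstat collects over the run (disk I/O, memory, per-interface network traffic, and per-CPU utilization), ending with whole-run averages. As a purely hypothetical convenience, not part of this job, the all-CPU average row could be pulled out of a saved plain-text copy of the sar -P ALL output (here called sar_cpu.txt, with the Jenkins timestamp prefixes stripped):

  #!/bin/bash
  # Hypothetical helper: print the whole-run, all-CPU averages from a plain
  # `sar -P ALL` text dump. Columns: %user, %nice, %system, %iowait, %steal, %idle.
  grep -E '^Average: +all' sar_cpu.txt |
    awk '{printf "user=%s%% system=%s%% iowait=%s%% idle=%s%%\n", $3, $5, $6, $8}'

Against the table above this would report roughly 13% user, 3% system, 3% iowait and 81% idle, i.e. the build host spent most of the run idle rather than CPU-bound.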