Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22423 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-PyJk5Hh4qFUG/agent.2066
SSH_AGENT_PID=2068
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13368246422410356604.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13368246422410356604.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
Commit message: "Add missing delete composition in CSIT"
 > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
provisioning config files...
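(The pinned checkout above can be reproduced outside Jenkins; a minimal sketch, assuming the Gerrit mirror is reachable anonymously over git:// — the job itself authenticates via GIT_SSH:)

    #!/usr/bin/env bash
    # Re-create the job's workspace at the exact revision it built.
    set -euo pipefail

    REPO=git://cloud.onap.org/mirror/policy/docker.git
    REV=8b99874d0fe646f509546f6b38b185b8f089ba50   # revision from the log above

    git init workspace && cd workspace
    git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f "$REV"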
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14714109271736062214.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-wKOo
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-wKOo/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-wKOo/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.40
botocore==1.38.40
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.1
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
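(The lf-activate-venv() steps above amount to creating a throwaway python3 venv, installing lftools into it, and prepending its bin directory to PATH; a minimal sketch of the equivalent manual setup — /tmp/venv-csit is a hypothetical fixed path standing in for the job's randomized /tmp/venv-wKOo:)

    #!/usr/bin/env bash
    # Manual equivalent of lf-activate-venv() as logged above.
    set -euo pipefail

    VENV=/tmp/venv-csit            # hypothetical path; the job generates a random one
    python3 -m venv "$VENV"
    "$VENV/bin/pip" install --upgrade pip
    "$VENV/bin/pip" install lftools
    export PATH="$VENV/bin:$PATH"
    pip freeze                     # the "Generating Requirements File" step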
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins13084161960845157615.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins2899289906938887222.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[... curl progress meter elided; download completed: 100% of 60.2M at 75.8M/s ...]
Setting project configuration for: pap
Configuring docker compose...
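(The two docker login warnings above point at the fix themselves: pipe the secret on stdin instead of passing it as a CLI argument; a minimal sketch, with the ONAP_NEXUS_* variables as hypothetical placeholders for the job's actual registry and credentials:)

    # Avoids "WARNING! Using --password via the CLI is insecure." by reading
    # the password from stdin rather than the command line.
    printf '%s' "$ONAP_NEXUS_PASSWORD" |
      docker login "$ONAP_NEXUS_REGISTRY" \
        --username "$ONAP_NEXUS_USER" \
        --password-stdin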
Starting apex-pdp using postgres + Grafana/Prometheus
policy-db-migrator Pulling
simulator Pulling
api Pulling
apex-pdp Pulling
postgres Pulling
prometheus Pulling
zookeeper Pulling
pap Pulling
kafka Pulling
grafana Pulling
[... interleaved per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress output elided ...]
policy-db-migrator Pulled
api Pulled
pap Pulled
[... remaining layer download/extraction progress elided; this section of the log ends mid-pull, before the other images report "Pulled" ...]
eabd8714fec9 Downloading [=========================================> ] 312MB/375MB 55f2b468da67 Extracting [==============================> ] 158.2MB/257.9MB eabd8714fec9 Downloading [===========================================> ] 328.2MB/375MB 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB eabd8714fec9 Downloading [==============================================> ] 346.6MB/375MB 6ac0e4adf315 Pull complete 30bb92ff0608 Pull complete eabd8714fec9 Downloading [===============================================> ] 356.3MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB eabd8714fec9 Downloading [=================================================> ] 372MB/375MB eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB eca0188f477e Pull complete 4ba79830ebce Pull complete 098efa8b34b7 Pull complete e27c75a98748 Pull complete 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB 807a2e881ecd Extracting [============================> ] 32.77kB/58.07kB 807a2e881ecd Extracting [==================================================>] 58.07kB/58.07kB 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB f3b09c502777 Extracting [==> ] 2.785MB/56.52MB 55f2b468da67 Extracting [===================================> ] 184.4MB/257.9MB f3b09c502777 Extracting [======> ] 7.799MB/56.52MB 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB f3b09c502777 Extracting [==========> ] 11.7MB/56.52MB 55f2b468da67 Extracting [=====================================> ] 194.4MB/257.9MB f3b09c502777 Extracting [==============> ] 16.15MB/56.52MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B f3b09c502777 Extracting [================> ] 18.94MB/56.52MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB f3b09c502777 Extracting [======================> ] 25.62MB/56.52MB f3b09c502777 Extracting [========================> ] 27.3MB/56.52MB 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 807a2e881ecd Pull complete f3b09c502777 Extracting [================================> ] 36.21MB/56.52MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB d223479d7367 Extracting [> ] 98.3kB/6.742MB f3b09c502777 Extracting [===========================================> ] 49.02MB/56.52MB 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB d223479d7367 Extracting [==> ] 294.9kB/6.742MB e444bcd4d577 Pull complete 4a4d0948b0bf Extracting [==================================================>] 27.78kB/27.78kB 4a4d0948b0bf Extracting 
[==================================================>] 27.78kB/27.78kB f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB e73cb4a42719 Extracting [==> ] 4.456MB/109.1MB d223479d7367 Extracting [======> ] 884.7kB/6.742MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB d223479d7367 Extracting [================> ] 2.261MB/6.742MB f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB e73cb4a42719 Extracting [===> ] 7.242MB/109.1MB d223479d7367 Extracting [======================> ] 3.047MB/6.742MB d223479d7367 Extracting [========================> ] 3.244MB/6.742MB e73cb4a42719 Extracting [===> ] 7.799MB/109.1MB 614e034e242f Pull complete f3b09c502777 Pull complete 4a4d0948b0bf Pull complete 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B simulator Pulled 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB d223479d7367 Extracting [==================================> ] 4.62MB/6.742MB eabd8714fec9 Extracting [> ] 557.1kB/375MB e73cb4a42719 Extracting [====> ] 10.58MB/109.1MB 04f6155c873d Extracting [> ] 557.1kB/107.3MB 408012a7b118 Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB eabd8714fec9 Extracting [=> ] 8.356MB/375MB d223479d7367 Extracting [==========================================> ] 5.702MB/6.742MB e73cb4a42719 Extracting [======> ] 13.93MB/109.1MB 04f6155c873d Extracting [=> ] 2.785MB/107.3MB 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 44986281b8b9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB eabd8714fec9 Extracting [==> ] 15.04MB/375MB d223479d7367 Extracting [==================================================>] 6.742MB/6.742MB e73cb4a42719 Extracting [=======> ] 17.27MB/109.1MB 04f6155c873d Extracting [==> ] 5.014MB/107.3MB eabd8714fec9 Extracting [==> ] 18.38MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB e73cb4a42719 Extracting [=========> ] 21.17MB/109.1MB 04f6155c873d Extracting [===> ] 6.685MB/107.3MB e73cb4a42719 Extracting [==========> ] 22.28MB/109.1MB eabd8714fec9 Extracting [==> ] 21.73MB/375MB 04f6155c873d Extracting [===> ] 7.242MB/107.3MB 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB e73cb4a42719 Extracting [===========> ] 25.07MB/109.1MB 04f6155c873d Extracting [====> ] 10.58MB/107.3MB 55f2b468da67 Extracting [==========================================> ] 216.7MB/257.9MB e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB 04f6155c873d Extracting [=======> ] 15.04MB/107.3MB 55f2b468da67 Extracting [==========================================> ] 219.5MB/257.9MB e73cb4a42719 Extracting [=============> ] 29.52MB/109.1MB eabd8714fec9 Extracting [===> ] 28.41MB/375MB eabd8714fec9 Extracting [====> ] 
35.09MB/375MB e73cb4a42719 Extracting [===============> ] 32.87MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 222.3MB/257.9MB 04f6155c873d Extracting [========> ] 17.27MB/107.3MB eabd8714fec9 Extracting [=====> ] 44.56MB/375MB e73cb4a42719 Extracting [================> ] 36.77MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 224.5MB/257.9MB 04f6155c873d Extracting [========> ] 18.38MB/107.3MB eabd8714fec9 Extracting [======> ] 51.25MB/375MB e73cb4a42719 Extracting [==================> ] 40.67MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB 04f6155c873d Extracting [==========> ] 22.28MB/107.3MB bf70c5107ab5 Pull complete d223479d7367 Pull complete e73cb4a42719 Extracting [====================> ] 45.68MB/109.1MB eabd8714fec9 Extracting [========> ] 60.72MB/375MB 04f6155c873d Extracting [===========> ] 25.62MB/107.3MB 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB eabd8714fec9 Extracting [========> ] 66.85MB/375MB 55f2b468da67 Extracting [============================================> ] 230.1MB/257.9MB 04f6155c873d Extracting [=============> ] 29.52MB/107.3MB eabd8714fec9 Extracting [==========> ] 75.2MB/375MB 04f6155c873d Extracting [==============> ] 31.2MB/107.3MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB ece604b40811 Extracting [==================================================>] 303B/303B ece604b40811 Extracting [==================================================>] 303B/303B 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB eabd8714fec9 Extracting [==========> ] 80.22MB/375MB 04f6155c873d Extracting [===============> ] 33.42MB/107.3MB e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB eabd8714fec9 Extracting [===========> ] 88.01MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 04f6155c873d Extracting [================> ] 36.21MB/107.3MB e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB eabd8714fec9 Extracting [============> ] 93.59MB/375MB 04f6155c873d Extracting [=================> ] 37.32MB/107.3MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB eabd8714fec9 Extracting [============> ] 95.26MB/375MB 04f6155c873d Extracting [==================> ] 39.55MB/107.3MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB eabd8714fec9 Extracting [=============> ] 99.71MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB e73cb4a42719 Extracting [===========================> ] 60.72MB/109.1MB 04f6155c873d Extracting [===================> ] 42.89MB/107.3MB eabd8714fec9 Extracting [=============> ] 103.6MB/375MB 55f2b468da67 Extracting [==============================================> ] 240.6MB/257.9MB e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 04f6155c873d Extracting [=====================> ] 46.79MB/107.3MB eabd8714fec9 Extracting [==============> ] 105.8MB/375MB 04f6155c873d Extracting [======================> ] 49.02MB/107.3MB e73cb4a42719 Extracting 
[==============================> ] 67.4MB/109.1MB eabd8714fec9 Extracting [==============> ] 108.1MB/375MB 04f6155c873d Extracting [=======================> ] 51.25MB/107.3MB e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB eabd8714fec9 Extracting [==============> ] 110.9MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB e73cb4a42719 Extracting [=================================> ] 74.09MB/109.1MB eabd8714fec9 Extracting [===============> ] 113.6MB/375MB 04f6155c873d Extracting [=========================> ] 54.59MB/107.3MB 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB e73cb4a42719 Extracting [===================================> ] 77.43MB/109.1MB 04f6155c873d Extracting [===========================> ] 57.93MB/107.3MB ece604b40811 Pull complete 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [===============> ] 116.4MB/375MB 04f6155c873d Extracting [=============================> ] 62.39MB/107.3MB e73cb4a42719 Extracting [=====================================> ] 81.89MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 255.7MB/257.9MB e73cb4a42719 Extracting [========================================> ] 88.01MB/109.1MB 04f6155c873d Extracting [=============================> ] 62.95MB/107.3MB eabd8714fec9 Extracting [===============> ] 119.8MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 04f6155c873d Extracting [==============================> ] 65.73MB/107.3MB eabd8714fec9 Extracting [================> ] 122.6MB/375MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 04f6155c873d Extracting [===============================> ] 67.96MB/107.3MB eabd8714fec9 Extracting [================> ] 126.5MB/375MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 04f6155c873d Extracting [=================================> ] 72.42MB/107.3MB eabd8714fec9 Extracting [=================> ] 129.2MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 04f6155c873d Extracting [===================================> ] 75.76MB/107.3MB eabd8714fec9 Extracting [=================> ] 132MB/375MB c01e672f2391 Extracting [> ] 557.1kB/263.6MB e73cb4a42719 Extracting [==============================================> ] 101.9MB/109.1MB 04f6155c873d Extracting [=====================================> ] 79.66MB/107.3MB eabd8714fec9 Extracting [==================> ] 137MB/375MB c01e672f2391 Extracting [> ] 1.114MB/263.6MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 04f6155c873d Extracting [======================================> ] 83MB/107.3MB eabd8714fec9 Extracting [==================> ] 140.4MB/375MB c01e672f2391 Extracting [=> ] 9.47MB/263.6MB 1ccde423731d Pull complete e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 04f6155c873d Extracting [=======================================> ] 85.23MB/107.3MB eabd8714fec9 Extracting [===================> ] 145.4MB/375MB c01e672f2391 Extracting [===> ] 17.27MB/263.6MB 04f6155c873d Extracting 
[=========================================> ] 88.57MB/107.3MB eabd8714fec9 Extracting [===================> ] 148.2MB/375MB c01e672f2391 Extracting [=====> ] 27.85MB/263.6MB c01e672f2391 Extracting [=====> ] 28.41MB/263.6MB eabd8714fec9 Extracting [===================> ] 149.3MB/375MB 04f6155c873d Extracting [==========================================> ] 90.24MB/107.3MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB c01e672f2391 Extracting [======> ] 36.21MB/263.6MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 04f6155c873d Extracting [===========================================> ] 94.14MB/107.3MB eabd8714fec9 Extracting [====================> ] 151.5MB/375MB c01e672f2391 Extracting [========> ] 46.79MB/263.6MB 04f6155c873d Extracting [==============================================> ] 99.16MB/107.3MB eabd8714fec9 Extracting [====================> ] 154.3MB/375MB c01e672f2391 Extracting [=========> ] 52.36MB/263.6MB 04f6155c873d Extracting [==============================================> ] 100.3MB/107.3MB c01e672f2391 Extracting [==========> ] 56.82MB/263.6MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 04f6155c873d Extracting [================================================> ] 103.6MB/107.3MB c01e672f2391 Extracting [============> ] 66.85MB/263.6MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B eabd8714fec9 Extracting [=====================> ] 159.3MB/375MB 04f6155c873d Extracting [================================================> ] 104.7MB/107.3MB c01e672f2391 Extracting [===============> ] 79.1MB/263.6MB eabd8714fec9 Extracting [=====================> ] 162.1MB/375MB 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB c01e672f2391 Extracting [=================> ] 89.69MB/263.6MB eabd8714fec9 Extracting [======================> ] 166.6MB/375MB c01e672f2391 Extracting [===================> ] 103.6MB/263.6MB eabd8714fec9 Extracting [======================> ] 171.6MB/375MB c01e672f2391 Extracting [=====================> ] 114.2MB/263.6MB eabd8714fec9 Extracting [========================> ] 181.6MB/375MB c01e672f2391 Extracting [=======================> ] 124.8MB/263.6MB 55f2b468da67 Pull complete eabd8714fec9 Extracting [========================> ] 187.2MB/375MB c01e672f2391 Extracting [========================> ] 127.6MB/263.6MB eabd8714fec9 Extracting [==========================> ] 200MB/375MB c01e672f2391 Extracting [=========================> ] 137MB/263.6MB eabd8714fec9 Extracting [===========================> ] 205MB/375MB c01e672f2391 Extracting [==========================> ] 138.7MB/263.6MB eabd8714fec9 Extracting [============================> ] 210.6MB/375MB c01e672f2391 Extracting [==========================> ] 140.4MB/263.6MB c01e672f2391 Extracting [===========================> ] 145.4MB/263.6MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB c01e672f2391 Extracting [=============================> ] 156MB/263.6MB eabd8714fec9 Extracting [=============================> ] 221.2MB/375MB eabd8714fec9 Extracting [=============================> ] 224.5MB/375MB c01e672f2391 Extracting [===============================> ] 165.4MB/263.6MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB c01e672f2391 Extracting [================================> ] 173.8MB/263.6MB eabd8714fec9 Extracting 
[==============================> ] 227.3MB/375MB c01e672f2391 Extracting [=================================> ] 177.1MB/263.6MB 82bfc142787e Extracting [==> ] 491.5kB/8.613MB eabd8714fec9 Extracting [==============================> ] 229.5MB/375MB c01e672f2391 Extracting [===================================> ] 188.8MB/263.6MB 82bfc142787e Extracting [=================================================> ] 8.454MB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB c01e672f2391 Extracting [======================================> ] 200.5MB/263.6MB eabd8714fec9 Extracting [================================> ] 241.2MB/375MB c01e672f2391 Extracting [========================================> ] 214.5MB/263.6MB eabd8714fec9 Extracting [================================> ] 246.2MB/375MB e73cb4a42719 Pull complete c01e672f2391 Extracting [===========================================> ] 228.4MB/263.6MB c01e672f2391 Extracting [==============================================> ] 244.5MB/263.6MB eabd8714fec9 Extracting [=================================> ] 249MB/375MB c01e672f2391 Extracting [=================================================> ] 259MB/263.6MB c01e672f2391 Extracting [==================================================>] 263.6MB/263.6MB eabd8714fec9 Extracting [=================================> ] 254MB/375MB eabd8714fec9 Extracting [==================================> ] 260.7MB/375MB eabd8714fec9 Extracting [===================================> ] 266.8MB/375MB 7221d93db8a9 Pull complete 04f6155c873d Pull complete eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB eabd8714fec9 Extracting [=====================================> ] 280.2MB/375MB eabd8714fec9 Extracting [======================================> ] 286.9MB/375MB eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 82bfc142787e Pull complete c01e672f2391 Pull complete a83b68436f09 Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 787d6bee9571 Extracting [==================================================>] 127B/127B apex-pdp Pulled eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 7df673c7455d Pull complete 787d6bee9571 Pull complete prometheus Pulled 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB eabd8714fec9 Extracting [=======================================> ] 
298.6MB/375MB 13ff0988aaea Pull complete 46baca71a4ef Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB b0e0ef7895f4 Extracting [===========> ] 8.651MB/37.01MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB b0e0ef7895f4 Extracting [=======================> ] 17.3MB/37.01MB 7e568a0dc8fb Pull complete postgres Pulled 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB b0e0ef7895f4 Extracting [===================================> ] 25.95MB/37.01MB eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB b0e0ef7895f4 Extracting [=============================================> ] 33.82MB/37.01MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 85dde7dceb0a Extracting [========> ] 11.14MB/63.48MB eabd8714fec9 Extracting [==========================================> ] 315.9MB/375MB 85dde7dceb0a Extracting [==========> ] 12.81MB/63.48MB e040ea11fa10 Pull complete eabd8714fec9 Extracting [==========================================> ] 319.2MB/375MB 85dde7dceb0a Extracting [============> ] 15.6MB/63.48MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB eabd8714fec9 Extracting [===========================================> ] 323.1MB/375MB 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB 09d5a3f70313 Extracting [=====> ] 11.7MB/109.2MB eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB 85dde7dceb0a Extracting [==============> ] 18.94MB/63.48MB 09d5a3f70313 Extracting [==========> ] 23.4MB/109.2MB 
eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 85dde7dceb0a Extracting [=================> ] 22.28MB/63.48MB 09d5a3f70313 Extracting [===============> ] 33.98MB/109.2MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 09d5a3f70313 Extracting [=====================> ] 46.24MB/109.2MB 85dde7dceb0a Extracting [===================> ] 25.07MB/63.48MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 09d5a3f70313 Extracting [===========================> ] 60.16MB/109.2MB 85dde7dceb0a Extracting [=====================> ] 27.85MB/63.48MB eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 09d5a3f70313 Extracting [=================================> ] 72.97MB/109.2MB 85dde7dceb0a Extracting [========================> ] 30.64MB/63.48MB eabd8714fec9 Extracting [============================================> ] 335.9MB/375MB 09d5a3f70313 Extracting [=======================================> ] 86.9MB/109.2MB 85dde7dceb0a Extracting [==========================> ] 33.42MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 09d5a3f70313 Extracting [============================================> ] 96.93MB/109.2MB 85dde7dceb0a Extracting [============================> ] 35.65MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 09d5a3f70313 Extracting [===============================================> ] 104.2MB/109.2MB 85dde7dceb0a Extracting [=============================> ] 37.88MB/63.48MB 09d5a3f70313 Extracting [=================================================> ] 107.5MB/109.2MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 85dde7dceb0a Extracting [================================> ] 41.22MB/63.48MB 85dde7dceb0a Extracting [==================================> ] 44.01MB/63.48MB 85dde7dceb0a Extracting [======================================> ] 48.46MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 85dde7dceb0a Extracting [=======================================> ] 50.69MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 85dde7dceb0a Extracting [==========================================> ] 54.03MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB 85dde7dceb0a Extracting [===============================================> ] 60.72MB/63.48MB eabd8714fec9 Extracting [===============================================> ] 354.3MB/375MB 85dde7dceb0a Extracting [=================================================> ] 62.39MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 85dde7dceb0a Pull complete eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 
356f5c2c843b Pull complete 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB kafka Pulled eabd8714fec9 Extracting [===============================================> ] 358.7MB/375MB 7009d5001b77 Pull complete 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB eabd8714fec9 Extracting [================================================> ] 366.5MB/375MB 538deb30e80c Pull complete grafana Pulled eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB eabd8714fec9 Extracting [=================================================> ] 372.1MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [=====================> ] 3.834MB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB f3a82e9f1761 Extracting [==============> ] 12.85MB/44.41MB f3a82e9f1761 Extracting [===============================> ] 27.98MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [=====> ] 14.48MB/127.4MB da3ed5db7103 Extracting [===========> ] 28.97MB/127.4MB da3ed5db7103 Extracting [================> ] 42.34MB/127.4MB da3ed5db7103 Extracting [======================> ] 57.93MB/127.4MB da3ed5db7103 Extracting [===========================> ] 71.3MB/127.4MB da3ed5db7103 Extracting [==================================> ] 
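In condensed form, the pull phase above and the startup phase below are a standard compose workflow. A minimal sketch, assuming a compose file named compose.yaml in the working directory; the actual file name and flags used by the CSIT scripts are not visible in this log:

    # pull all images referenced by the compose file, then start the stack detached
    docker compose -f compose.yaml pull
    docker compose -f compose.yaml up -d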
Network compose_default Creating
Network compose_default Created
Container simulator Creating
Container prometheus Creating
Container postgres Creating
Container zookeeper Creating
Container zookeeper Created
Container kafka Creating
Container postgres Created
Container prometheus Created
Container policy-db-migrator Creating
Container grafana Creating
Container simulator Created
Container grafana Created
Container policy-db-migrator Created
Container policy-api Creating
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-apex-pdp Creating
Container policy-apex-pdp Created
Container zookeeper Starting
Container prometheus Starting
Container postgres Starting
Container simulator Starting
Container simulator Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for policy-pap to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
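The "Checking if REST port ... is open" step amounts to polling the port until it accepts TCP connections. A minimal sketch of such a wait loop, assuming netcat is available; the retry count and interval are assumptions, not values taken from the actual CSIT scripts:

    #!/bin/bash
    # poll a localhost port until it accepts connections or we give up
    port=30003   # hypothetical: a real script would take this as an argument
    for _ in $(seq 1 30); do
      if nc -z localhost "${port}"; then
        echo "REST port ${port} is open on localhost"
        exit 0
      fi
      sleep 2
    done
    echo "REST port ${port} did not open in time" >&2
    exit 1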
Checking if REST port 30001 is open on localhost ...
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/models'...
Building robot framework docker image
sha256:eb039b5b77d3d51bfaa2f5a7941dde0a1cafb141b4fb6a00962450b6504886c0
top - 23:14:54 up 4 min, 0 users, load average: 1.94, 1.89, 0.86
Tasks: 234 total, 1 running, 156 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.1 us, 3.5 sy, 0.0 ni, 78.1 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st
          total    used    free    shared    buff/cache    available
Mem:        31G    2.6G     20G       28M          8.1G          28G
Swap:      1.0G      0B    1.0G
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up 2 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up 2 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up 2 minutes
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up 2 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up 2 minutes
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up 2 minutes
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up 2 minutes
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
947e897bfa5f   policy-apex-pdp   3.11%   216.6MiB / 31.41GiB   0.67%   49.5kB / 65kB     0B / 0B       51
805647a2088e   policy-pap        1.90%   473.2MiB / 31.41GiB   1.47%   131kB / 216kB     0B / 139MB    68
5f548f1219a8   policy-api        0.14%   400.1MiB / 31.41GiB   1.24%   1.15MB / 1.02MB   0B / 0B       59
e38730860a2c   kafka             5.20%   381.6MiB / 31.41GiB   1.19%   204kB / 183kB     0B / 614kB    83
3862788423ef   grafana           0.17%   98.69MiB / 31.41GiB   0.31%   19MB / 251kB      0B / 30.2MB   19
1452a71132fe   zookeeper         0.08%   84.25MiB / 31.41GiB   0.26%   53.6kB / 47.6kB   0B / 348kB    62
46521d786f17   simulator         0.07%   121.5MiB / 31.41GiB   0.38%   1.96kB / 0B       205kB / 0B    64
1090503232da   prometheus        0.22%   21.59MiB / 31.41GiB   0.07%   134kB / 6.14kB    0B / 0B       13
8074e2a92d68   postgres          0.01%   85.07MiB / 31.41GiB   0.26%   1.67MB / 1.73MB   0B / 160MB    26
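The per-container resource table above is one-shot docker stats output. It can be reproduced against the running stack like this; the exact invocation and format string used by the job are not shown in the log:

    # print a single snapshot instead of the default live-updating stream
    docker stats --no-stream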
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
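Each "-v NAME:value" pair above is handed to Robot Framework as a suite variable. A hedged sketch of the underlying invocation inside the policy-csit container; the working directory and the subset of variables shown are assumptions based on the ROBOT_VARIABLES listing above:

    # run both suites, overriding suite variables with -v (--variable)
    robot -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
          -v POLICY_PAP_IP:policy-pap:6969 \
          -v POLICY_API_IP:policy-api:6969 \
          pap-test.robot pap-slas.robot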
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
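Output, Log and Report above are the standard Robot Framework artifacts. If only output.xml is archived, the HTML log and report can be regenerated from it with rebot; the paths below simply mirror the ones printed in the log:

    # rebuild log.html and report.html from the machine-readable results
    rebot --log /tmp/results/log.html --report /tmp/results/report.html /tmp/results/output.xml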
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up 3 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-19T23:12:46.410673138Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-19T23:12:46Z
grafana | logger=settings t=2025-06-19T23:12:46.411040982Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-19T23:12:46.411053392Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-19T23:12:46.411057432Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-19T23:12:46.411060772Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-19T23:12:46.411064182Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-19T23:12:46.411067192Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-19T23:12:46.411069972Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-19T23:12:46.411073152Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-19T23:12:46.411077032Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-19T23:12:46.411080262Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-19T23:12:46.411083362Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-19T23:12:46.411086372Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-19T23:12:46.411101043Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-19T23:12:46.411104163Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-19T23:12:46.411108163Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-19T23:12:46.411113453Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-19T23:12:46.411117273Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-19T23:12:46.411120593Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-19T23:12:46.411613719Z level=info msg=FeatureToggles azureMonitorEnableUserAuth=true logsInfiniteScrolling=true useSessionStorageForRedirection=true grafanaconThemes=true pinNavItems=true correlations=true alertingRulePermanentlyDelete=true awsAsyncQueryCaching=true ssoSettingsSAML=true groupToNestedTableTransformation=true cloudWatchNewLabelParsing=true alertingNotificationsStepMode=true cloudWatchCrossAccountQuerying=true dashboardSceneForViewers=true dataplaneFrontendFallback=true prometheusUsesCombobox=true azureMonitorPrometheusExemplars=true transformationsRedesign=true panelMonitoring=true logsExploreTableVisualisation=true lokiQueryHints=true recoveryThreshold=true logsContextDatasourceUi=true alertingQueryAndExpressionsStepMode=true reportingUseRawTimeRange=true logsPanelControls=true newPDFRendering=true cloudWatchRoundUpEndTime=true alertingApiServer=true alertingSimplifiedRouting=true prometheusAzureOverrideAudience=true annotationPermissionUpdate=true preinstallAutoUpdate=true addFieldFromCalculationStatFunctions=true newFiltersUI=true externalCorePlugins=true kubernetesClientDashboardsFolders=true kubernetesPlaylists=true influxdbBackendMigration=true alertingRuleVersionHistoryRestore=true unifiedStorageSearchPermissionFiltering=true alertingInsights=true tlsMemcached=true unifiedRequestLog=true nestedFolders=true dashgpt=true failWrongDSUID=true publicDashboardsScene=true dashboardScene=true alertRuleRestore=true alertingUIOptimizeReducer=true pluginsDetailsRightPanel=true lokiLabelNamesQueryApi=true lokiStructuredMetadata=true alertingRuleRecoverDeleted=true newDashboardSharingComponent=true logRowsPopoverMenu=true lokiQuerySplitting=true onPremToCloudMigrations=true angularDeprecationUI=true dashboardSceneSolo=true formatString=true promQLScope=true ssoSettingsApi=true recordedQueriesMulti=true
grafana | logger=sqlstore t=2025-06-19T23:12:46.411680329Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-19T23:12:46.41169839Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-19T23:12:46.413495301Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-19T23:12:46.413509431Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-19T23:12:46.414190989Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-19T23:12:46.41518496Z level=info msg="Migration successfully executed" id="create migration_log table" duration=993.631µs
grafana | logger=migrator t=2025-06-19T23:12:46.435558717Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-19T23:12:46.436811592Z level=info msg="Migration successfully executed" id="create user table" duration=1.250155ms
grafana | logger=migrator t=2025-06-19T23:12:46.441620098Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-19T23:12:46.442777302Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.156714ms
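Each "Executing migration"/"Migration successfully executed" pair below is recorded in the migration_log table created by the first migration. A speculative way to inspect that history, assuming the sqlite3 CLI is present in the grafana container and that migration_log carries migration_id, success and timestamp columns (both assumptions):

    # list the five most recently applied Grafana DB migrations
    docker exec grafana sqlite3 /var/lib/grafana/grafana.db \
      "SELECT migration_id, success, timestamp FROM migration_log ORDER BY id DESC LIMIT 5;"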
grafana | logger=migrator t=2025-06-19T23:12:46.442777302Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.156714ms
grafana | logger=migrator t=2025-06-19T23:12:46.44863199Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-19T23:12:46.449375058Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=742.558µs
grafana | logger=migrator t=2025-06-19T23:12:46.453267354Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.454307236Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.039862ms
grafana | logger=migrator t=2025-06-19T23:12:46.458528005Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.459511896Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=983.551µs
grafana | logger=migrator t=2025-06-19T23:12:46.464637606Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.467047864Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.409968ms
grafana | logger=migrator t=2025-06-19T23:12:46.470280732Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-19T23:12:46.471093251Z level=info msg="Migration successfully executed" id="create user table v2" duration=813.09µs
grafana | logger=migrator t=2025-06-19T23:12:46.474574712Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.4752817Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=706.458µs
grafana | logger=migrator t=2025-06-19T23:12:46.478792811Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.479881234Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.088192ms
grafana | logger=migrator t=2025-06-19T23:12:46.484642489Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:46.485177705Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=535.216µs
grafana | logger=migrator t=2025-06-19T23:12:46.488438933Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-19T23:12:46.488898369Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=459.036µs
grafana | logger=migrator t=2025-06-19T23:12:46.492056965Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-19T23:12:46.493119018Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.056852ms
grafana | logger=migrator t=2025-06-19T23:12:46.49846928Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.498541061Z level=info msg="Migration successfully executed" id="Update user table charset" duration=73.011µs
grafana | logger=migrator t=2025-06-19T23:12:46.502086962Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-19T23:12:46.503788082Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.69116ms
grafana | logger=migrator t=2025-06-19T23:12:46.507358803Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-19T23:12:46.507682727Z level=info msg="Migration successfully executed" id="Add missing user data" duration=323.824µs
grafana | logger=migrator t=2025-06-19T23:12:46.51136801Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-19T23:12:46.512471613Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.103863ms
grafana | logger=migrator t=2025-06-19T23:12:46.51741899Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-19T23:12:46.518165109Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=745.529µs
grafana | logger=migrator t=2025-06-19T23:12:46.521516778Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-19T23:12:46.522650582Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.133013ms
grafana | logger=migrator t=2025-06-19T23:12:46.526689769Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-19T23:12:46.536475273Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.786304ms
grafana | logger=migrator t=2025-06-19T23:12:46.540197936Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-19T23:12:46.541253448Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.055132ms
grafana | logger=migrator t=2025-06-19T23:12:46.546491099Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-19T23:12:46.546718342Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=227.343µs
grafana | logger=migrator t=2025-06-19T23:12:46.550178112Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-19T23:12:46.551263785Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.085073ms
grafana | logger=migrator t=2025-06-19T23:12:46.554855656Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-19T23:12:46.556636247Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.780301ms
grafana | logger=migrator t=2025-06-19T23:12:46.560198169Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-19T23:12:46.560525893Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=325.144µs
grafana | logger=migrator t=2025-06-19T23:12:46.565580341Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-19T23:12:46.566112427Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=531.066µs
grafana | logger=migrator t=2025-06-19T23:12:46.569487037Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-19T23:12:46.570159065Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=671.318µs
grafana | logger=migrator t=2025-06-19T23:12:46.573755357Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-19T23:12:46.574291413Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=535.746µs
grafana | logger=migrator t=2025-06-19T23:12:46.5792294Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-19T23:12:46.580005899Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=776.709µs
grafana | logger=migrator t=2025-06-19T23:12:46.583319048Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-19T23:12:46.58438848Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.069112ms
grafana | logger=migrator t=2025-06-19T23:12:46.587860901Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-19T23:12:46.588938793Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.077432ms
grafana | logger=migrator t=2025-06-19T23:12:46.593910041Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-19T23:12:46.594620829Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=710.398µs
grafana | logger=migrator t=2025-06-19T23:12:46.597747066Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-19T23:12:46.598440754Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=693.038µs
grafana | logger=migrator t=2025-06-19T23:12:46.60241785Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.60245532Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=40.81µs
grafana | logger=migrator t=2025-06-19T23:12:46.606040782Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.607067684Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.026442ms
grafana | logger=migrator t=2025-06-19T23:12:46.611912121Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.612876782Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=965.141µs
grafana | logger=migrator t=2025-06-19T23:12:46.6161225Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.616738717Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=616.317µs
grafana | logger=migrator t=2025-06-19T23:12:46.620250468Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.620850365Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=599.677µs
grafana | logger=migrator t=2025-06-19T23:12:46.62554022Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.630315625Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.775165ms
grafana | logger=migrator t=2025-06-19T23:12:46.633662394Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-19T23:12:46.634492584Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=830.06µs
grafana | logger=migrator t=2025-06-19T23:12:46.637382938Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.638076036Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=692.538µs
grafana | logger=migrator t=2025-06-19T23:12:46.668716652Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.669933256Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.216634ms
grafana | logger=migrator t=2025-06-19T23:12:46.673419727Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.675283009Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.861912ms
grafana | logger=migrator t=2025-06-19T23:12:46.678254303Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.678979022Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=724.379µs
grafana | logger=migrator t=2025-06-19T23:12:46.683461404Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:46.683868909Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=407.375µs
grafana | logger=migrator t=2025-06-19T23:12:46.686145345Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-19T23:12:46.686658851Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=512.316µs
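The block above repeats one fixed sequence twice (for user, then temp_user): drop the old indexes, rename the table aside, create a v2 table, recreate the indexes, copy the rows, and drop the renamed table. That is the standard workaround for SQLite's limited ALTER TABLE, which cannot change existing columns in place, so the whole table is rebuilt instead. A sketch of one such cycle under that assumption; the DDL is abbreviated and not Grafana's real schema:

```go
package main

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// rebuildUserTable mirrors one migration cycle from the log: the old table is
// renamed aside, a v2 table is created with the desired shape, rows are
// copied across, and the renamed original is dropped.
func rebuildUserTable(db *sql.DB) error {
	steps := []string{
		`DROP INDEX IF EXISTS UQE_user_login`,                // "drop index UQE_user_login - v1"
		`ALTER TABLE user RENAME TO user_v1`,                 // "Rename table user to user_v1 - v1"
		`CREATE TABLE user (
		    id INTEGER PRIMARY KEY AUTOINCREMENT,
		    login TEXT NOT NULL,
		    email TEXT NOT NULL)`,                            // "create user table v2"
		`CREATE UNIQUE INDEX UQE_user_login ON user (login)`, // "create index UQE_user_login - v2"
		`INSERT INTO user (id, login, email)
		    SELECT id, login, email FROM user_v1`,            // "copy ... v1 to v2"
		`DROP TABLE user_v1`,                                 // "Drop old table user_v1"
	}
	for _, s := range steps {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	db.Exec(`CREATE TABLE user (id INTEGER PRIMARY KEY, login TEXT, email TEXT)`)
	db.Exec(`CREATE UNIQUE INDEX UQE_user_login ON user (login)`)
	if err := rebuildUserTable(db); err != nil {
		panic(err)
	}
}
```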
grafana | logger=migrator t=2025-06-19T23:12:46.689720077Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-19T23:12:46.690097161Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=376.414µs
grafana | logger=migrator t=2025-06-19T23:12:46.693076966Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-19T23:12:46.693763404Z level=info msg="Migration successfully executed" id="create star table" duration=685.798µs
grafana | logger=migrator t=2025-06-19T23:12:46.698167245Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-19T23:12:46.699604462Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.433587ms
grafana | logger=migrator t=2025-06-19T23:12:46.703477117Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-19T23:12:46.705748633Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.268276ms
grafana | logger=migrator t=2025-06-19T23:12:46.708996911Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-19T23:12:46.710442648Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.445807ms
grafana | logger=migrator t=2025-06-19T23:12:46.715094022Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-19T23:12:46.717183296Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.088054ms
grafana | logger=migrator t=2025-06-19T23:12:46.720341743Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-19T23:12:46.721628218Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.281145ms
grafana | logger=migrator t=2025-06-19T23:12:46.725013508Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-19T23:12:46.725815617Z level=info msg="Migration successfully executed" id="create org table v1" duration=801.339µs
grafana | logger=migrator t=2025-06-19T23:12:46.728844672Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.729619591Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=774.569µs
grafana | logger=migrator t=2025-06-19T23:12:46.734287196Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-19T23:12:46.735402319Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.115143ms
grafana | logger=migrator t=2025-06-19T23:12:46.738676047Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.739915851Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.238044ms
grafana | logger=migrator t=2025-06-19T23:12:46.743359271Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.744184891Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=834.72µs
grafana | logger=migrator t=2025-06-19T23:12:46.747257127Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.748055816Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=799.989µs
grafana | logger=migrator t=2025-06-19T23:12:46.75271217Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.75274291Z level=info msg="Migration successfully executed" id="Update org table charset" duration=31.18µs
grafana | logger=migrator t=2025-06-19T23:12:46.756390703Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.756421473Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=33.7µs
grafana | logger=migrator t=2025-06-19T23:12:46.75957802Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-19T23:12:46.759983795Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=405.265µs
grafana | logger=migrator t=2025-06-19T23:12:46.763913811Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-19T23:12:46.765186145Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.272305ms
grafana | logger=migrator t=2025-06-19T23:12:46.770397126Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-19T23:12:46.771392418Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=995.312µs
grafana | logger=migrator t=2025-06-19T23:12:46.774569715Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-19T23:12:46.775546216Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=971.412µs
grafana | logger=migrator t=2025-06-19T23:12:46.779039847Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-19T23:12:46.780107579Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.067832ms
grafana | logger=migrator t=2025-06-19T23:12:46.783443478Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-19T23:12:46.784346078Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=903.61µs
grafana | logger=migrator t=2025-06-19T23:12:46.788727649Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.789517459Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=789.05µs
grafana | logger=migrator t=2025-06-19T23:12:46.792792127Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.797897857Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.10516ms
grafana | logger=migrator t=2025-06-19T23:12:46.801418667Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-19T23:12:46.802228697Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=810.12µs
grafana | logger=migrator t=2025-06-19T23:12:46.806547187Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.807396617Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=849.27µs
grafana | logger=migrator t=2025-06-19T23:12:46.810544754Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.811441714Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=898.1µs
grafana | logger=migrator t=2025-06-19T23:12:46.814472619Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:46.814863264Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=390.625µs
grafana | logger=migrator t=2025-06-19T23:12:46.819504568Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-19T23:12:46.820784633Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.280295ms
grafana | logger=migrator t=2025-06-19T23:12:46.824142542Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-19T23:12:46.824175272Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=33.14µs
grafana | logger=migrator t=2025-06-19T23:12:46.827714943Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.829576245Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.860982ms
grafana | logger=migrator t=2025-06-19T23:12:46.834181529Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.836059171Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.867092ms
grafana | logger=migrator t=2025-06-19T23:12:46.839268118Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.84112361Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.853742ms
grafana | logger=migrator t=2025-06-19T23:12:46.844066284Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.844826223Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=759.079µs
grafana | logger=migrator t=2025-06-19T23:12:46.84887879Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.850801142Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.920982ms
grafana | logger=migrator t=2025-06-19T23:12:46.853905858Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.854658937Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=752.279µs
grafana | logger=migrator t=2025-06-19T23:12:46.859442083Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-19T23:12:46.860663737Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.226704ms
grafana | logger=migrator t=2025-06-19T23:12:46.864273899Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.86431004Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=37.001µs
grafana | logger=migrator t=2025-06-19T23:12:46.8677583Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-19T23:12:46.86778403Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=26.34µs
grafana | logger=migrator t=2025-06-19T23:12:46.87037635Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.872292893Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.917522ms
grafana | logger=migrator t=2025-06-19T23:12:46.877324041Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.879275154Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.950733ms
grafana | logger=migrator t=2025-06-19T23:12:46.910500597Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.914683196Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=4.181619ms
grafana | logger=migrator t=2025-06-19T23:12:46.918191797Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.921248742Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.056145ms
grafana | logger=migrator t=2025-06-19T23:12:46.925996378Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-19T23:12:46.926235231Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=235.733µs
grafana | logger=migrator t=2025-06-19T23:12:46.929674281Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-19T23:12:46.93048325Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=808.959µs
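The trio "Add column uid in dashboard", "Update uid column values in dashboard", "Add unique index dashboard_org_id_uid" just above is the usual add-backfill-enforce pattern for introducing a new identifier: the column is added nullable, every existing row gets a generated value, and only then is uniqueness enforced. A sketch of the backfill step; the uid generator here is illustrative, not Grafana's actual one:

```go
package main

import (
	"crypto/rand"
	"database/sql"
	"encoding/hex"

	_ "github.com/mattn/go-sqlite3"
)

// newUID returns a short random identifier; purely illustrative.
func newUID() string {
	b := make([]byte, 7)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// backfillDashboardUIDs fills the freshly added uid column row by row, after
// which a unique index on (org_id, uid) can be created safely.
func backfillDashboardUIDs(db *sql.DB) error {
	rows, err := db.Query(`SELECT id FROM dashboard WHERE uid IS NULL OR uid = ''`)
	if err != nil {
		return err
	}
	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return err
		}
		ids = append(ids, id)
	}
	rows.Close()
	for _, id := range ids {
		if _, err := db.Exec(`UPDATE dashboard SET uid = ? WHERE id = ?`, newUID(), id); err != nil {
			return err
		}
	}
	_, err = db.Exec(`CREATE UNIQUE INDEX UQE_dashboard_org_id_uid ON dashboard (org_id, uid)`)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	db.Exec(`CREATE TABLE dashboard (id INTEGER PRIMARY KEY, org_id INTEGER DEFAULT 1, uid TEXT)`)
	db.Exec(`INSERT INTO dashboard (org_id) VALUES (1), (1)`)
	if err := backfillDashboardUIDs(db); err != nil {
		panic(err)
	}
}
```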
grafana | logger=migrator t=2025-06-19T23:12:46.934084752Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-19T23:12:46.935233435Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.148313ms
grafana | logger=migrator t=2025-06-19T23:12:46.93995086Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-19T23:12:46.939988141Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.451µs
grafana | logger=migrator t=2025-06-19T23:12:46.943791545Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-19T23:12:46.945136521Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.344586ms
grafana | logger=migrator t=2025-06-19T23:12:46.948789653Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-19T23:12:46.949525292Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=735.059µs
grafana | logger=migrator t=2025-06-19T23:12:46.95367892Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-19T23:12:46.959147744Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.465474ms
grafana | logger=migrator t=2025-06-19T23:12:46.962599134Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-19T23:12:46.963389173Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=789.709µs
grafana | logger=migrator t=2025-06-19T23:12:46.967060866Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.968190389Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.129523ms
grafana | logger=migrator t=2025-06-19T23:12:46.972228146Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-19T23:12:46.973559842Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.331256ms
grafana | logger=migrator t=2025-06-19T23:12:46.978074484Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:46.978406588Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=331.544µs
grafana | logger=migrator t=2025-06-19T23:12:46.981814578Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-19T23:12:46.982379345Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=564.146µs
grafana | logger=migrator t=2025-06-19T23:12:46.985973936Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-19T23:12:46.989355946Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.38306ms
grafana | logger=migrator t=2025-06-19T23:12:46.993669856Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-19T23:12:46.994442735Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=772.579µs
grafana | logger=migrator t=2025-06-19T23:12:46.997923475Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-19T23:12:46.998166178Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=242.313µs
grafana | logger=migrator t=2025-06-19T23:12:47.002298816Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-19T23:12:47.002711931Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=412.515µs
grafana | logger=migrator t=2025-06-19T23:12:47.008357986Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-19T23:12:47.009502289Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.144103ms
grafana | logger=migrator t=2025-06-19T23:12:47.0130743Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-19T23:12:47.01660671Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.531841ms
grafana | logger=migrator t=2025-06-19T23:12:47.02010419Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-19T23:12:47.023139935Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.035345ms
grafana | logger=migrator t=2025-06-19T23:12:47.027647566Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-19T23:12:47.028467946Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=819.77µs
grafana | logger=migrator t=2025-06-19T23:12:47.031894355Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-19T23:12:47.034212881Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.314316ms
grafana | logger=migrator t=2025-06-19T23:12:47.037832573Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-19T23:12:47.040041438Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.208525ms
grafana | logger=migrator t=2025-06-19T23:12:47.044510689Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-19T23:12:47.044895924Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=385.575µs
grafana | logger=migrator t=2025-06-19T23:12:47.051384418Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-19T23:12:47.057571989Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=6.185161ms
grafana | logger=migrator t=2025-06-19T23:12:47.064176984Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-19T23:12:47.065116535Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=941.551µs
grafana | logger=migrator t=2025-06-19T23:12:47.069148121Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-19T23:12:47.069627647Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=479.496µs
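"Add missing dashboard_uid and org_id to dashboard_tag" (and the matching star migration) backfills denormalized copies of the parent dashboard's uid and org_id onto child rows, so later code can reference dashboards by uid instead of the numeric id. On SQLite this kind of backfill can be expressed with correlated subqueries, as in this sketch (schemas abbreviated):

```go
package main

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// backfillDashboardTag copies uid and org_id from dashboard onto dashboard_tag
// rows that still lack them, using correlated subqueries keyed on dashboard_id.
func backfillDashboardTag(db *sql.DB) error {
	_, err := db.Exec(`
	    UPDATE dashboard_tag
	    SET dashboard_uid = (SELECT uid    FROM dashboard d WHERE d.id = dashboard_tag.dashboard_id),
	        org_id        = (SELECT org_id FROM dashboard d WHERE d.id = dashboard_tag.dashboard_id)
	    WHERE dashboard_uid IS NULL OR org_id IS NULL`)
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	db.Exec(`CREATE TABLE dashboard (id INTEGER PRIMARY KEY, uid TEXT, org_id INTEGER)`)
	db.Exec(`CREATE TABLE dashboard_tag (id INTEGER PRIMARY KEY, dashboard_id INTEGER,
	         term TEXT, dashboard_uid TEXT, org_id INTEGER)`)
	db.Exec(`INSERT INTO dashboard VALUES (1, 'abc123', 1)`)
	db.Exec(`INSERT INTO dashboard_tag (dashboard_id, term) VALUES (1, 'prod')`)
	if err := backfillDashboardTag(db); err != nil {
		panic(err)
	}
}
```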
grafana | logger=migrator t=2025-06-19T23:12:47.074878437Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-19T23:12:47.076181162Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.302445ms
grafana | logger=migrator t=2025-06-19T23:12:47.079244817Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-19T23:12:47.080164397Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=918.81µs
grafana | logger=migrator t=2025-06-19T23:12:47.085297166Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-19T23:12:47.086158466Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=861.5µs
grafana | logger=migrator t=2025-06-19T23:12:47.090139631Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.090974261Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=839.63µs
grafana | logger=migrator t=2025-06-19T23:12:47.093819254Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.094570402Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=751.208µs
grafana | logger=migrator t=2025-06-19T23:12:47.100313248Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.107130586Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.818418ms
grafana | logger=migrator t=2025-06-19T23:12:47.110758877Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-19T23:12:47.111417655Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=658.898µs
grafana | logger=migrator t=2025-06-19T23:12:47.115703014Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.116286761Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=583.737µs
grafana | logger=migrator t=2025-06-19T23:12:47.121815544Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.122744175Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=929.311µs
grafana | logger=migrator t=2025-06-19T23:12:47.151403043Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-19T23:12:47.15202993Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=627.107µs
grafana | logger=migrator t=2025-06-19T23:12:47.156697143Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-19T23:12:47.159207062Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.509289ms
grafana | logger=migrator t=2025-06-19T23:12:47.162199346Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-19T23:12:47.164630284Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.430258ms
grafana | logger=migrator t=2025-06-19T23:12:47.170918146Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.171015977Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=101.911µs
grafana | logger=migrator t=2025-06-19T23:12:47.176738412Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-19T23:12:47.176966235Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=229.123µs
grafana | logger=migrator t=2025-06-19T23:12:47.180977841Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-19T23:12:47.183440879Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.463048ms
grafana | logger=migrator t=2025-06-19T23:12:47.187934821Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-19T23:12:47.188131083Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=196.282µs
grafana | logger=migrator t=2025-06-19T23:12:47.191014066Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-19T23:12:47.191192588Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=179.172µs
grafana | logger=migrator t=2025-06-19T23:12:47.193280212Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-19T23:12:47.195691189Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.410577ms
grafana | logger=migrator t=2025-06-19T23:12:47.20007918Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-19T23:12:47.201242413Z level=info msg="Migration successfully executed" id="Update uid value" duration=1.163093ms
grafana | logger=migrator t=2025-06-19T23:12:47.203922744Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-19T23:12:47.204678272Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=757.088µs
grafana | logger=migrator t=2025-06-19T23:12:47.207464884Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-19T23:12:47.208211073Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=746.049µs
grafana | logger=migrator t=2025-06-19T23:12:47.211190187Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-19T23:12:47.213798717Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.60753ms
grafana | logger=migrator t=2025-06-19T23:12:47.217845573Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-19T23:12:47.220284311Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.437938ms
grafana | logger=migrator t=2025-06-19T23:12:47.223302255Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-19T23:12:47.223319256Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=17.401µs
grafana | logger=migrator t=2025-06-19T23:12:47.225603212Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-19T23:12:47.22631936Z level=info msg="Migration successfully executed" id="create api_key table" duration=715.948µs
grafana | logger=migrator t=2025-06-19T23:12:47.232120016Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-19T23:12:47.233267099Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.146453ms
grafana | logger=migrator t=2025-06-19T23:12:47.236478616Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-19T23:12:47.237642969Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.176253ms
grafana | logger=migrator t=2025-06-19T23:12:47.240950867Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-19T23:12:47.241796087Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=844.83µs
grafana | logger=migrator t=2025-06-19T23:12:47.246663713Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.247462832Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=797.769µs
grafana | logger=migrator t=2025-06-19T23:12:47.251534288Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.252779083Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.254445ms
grafana | logger=migrator t=2025-06-19T23:12:47.256090331Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.256800119Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=709.568µs
grafana | logger=migrator t=2025-06-19T23:12:47.261657324Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.268641574Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.98361ms
grafana | logger=migrator t=2025-06-19T23:12:47.272785612Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-19T23:12:47.27348261Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=696.788µs
grafana | logger=migrator t=2025-06-19T23:12:47.278235494Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.279004473Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=769.489µs
grafana | logger=migrator t=2025-06-19T23:12:47.281948637Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.282683745Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=735.419µs
grafana | logger=migrator t=2025-06-19T23:12:47.285635249Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.286421298Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=785.939µs
grafana | logger=migrator t=2025-06-19T23:12:47.291272453Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:47.291577677Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=305.474µs
grafana | logger=migrator t=2025-06-19T23:12:47.295438921Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-19T23:12:47.296581514Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.145553ms
grafana | logger=migrator t=2025-06-19T23:12:47.299507287Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.299533118Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.521µs
grafana | logger=migrator t=2025-06-19T23:12:47.301861924Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-19T23:12:47.304582436Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.720822ms
grafana | logger=migrator t=2025-06-19T23:12:47.308743793Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2025-06-19T23:12:47.311331173Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.58689ms
grafana | logger=migrator t=2025-06-19T23:12:47.31457705Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2025-06-19T23:12:47.314741512Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=164.992µs
grafana | logger=migrator t=2025-06-19T23:12:47.317419482Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2025-06-19T23:12:47.320074453Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.654151ms
grafana | logger=migrator t=2025-06-19T23:12:47.324406552Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2025-06-19T23:12:47.327033842Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.62742ms
grafana | logger=migrator t=2025-06-19T23:12:47.329730223Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2025-06-19T23:12:47.330443961Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=714.938µs
grafana | logger=migrator t=2025-06-19T23:12:47.333226093Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2025-06-19T23:12:47.33377863Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=552.667µs
grafana | logger=migrator t=2025-06-19T23:12:47.338391992Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2025-06-19T23:12:47.339226492Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=834.94µs
grafana | logger=migrator t=2025-06-19T23:12:47.341908873Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2025-06-19T23:12:47.342738142Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=818.709µs
grafana | logger=migrator t=2025-06-19T23:12:47.345597845Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2025-06-19T23:12:47.346384234Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=783.359µs
grafana | logger=migrator t=2025-06-19T23:12:47.350775414Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2025-06-19T23:12:47.351562363Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=786.119µs
grafana | logger=migrator t=2025-06-19T23:12:47.354467536Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2025-06-19T23:12:47.354485317Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.121µs
grafana | logger=migrator t=2025-06-19T23:12:47.357143527Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.357164327Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=20.65µs
grafana | logger=migrator t=2025-06-19T23:12:47.35998063Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2025-06-19T23:12:47.362769831Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.789512ms
grafana | logger=migrator t=2025-06-19T23:12:47.391421989Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2025-06-19T23:12:47.394172031Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.749332ms
grafana | logger=migrator t=2025-06-19T23:12:47.396965733Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2025-06-19T23:12:47.396981553Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=16.45µs
grafana | logger=migrator t=2025-06-19T23:12:47.399820955Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2025-06-19T23:12:47.400488733Z level=info msg="Migration successfully executed" id="create quota table v1" duration=667.858µs
grafana | logger=migrator t=2025-06-19T23:12:47.405093746Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.405889235Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=795.429µs
grafana | logger=migrator t=2025-06-19T23:12:47.408612136Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.408634426Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=20.73µs
grafana | logger=migrator t=2025-06-19T23:12:47.411489359Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-19T23:12:47.412247568Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=757.869µs
grafana | logger=migrator t=2025-06-19T23:12:47.417050353Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.417843732Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=792.599µs
grafana | logger=migrator t=2025-06-19T23:12:47.420830226Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-19T23:12:47.423878941Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.048435ms
grafana | logger=migrator t=2025-06-19T23:12:47.426670623Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.426699633Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.41µs
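Note the durations: the various "Update ... table charset" migrations complete in tens of microseconds, consistent with them being effective no-ops on SQLite; charset conversion is only meaningful on MySQL, so a dialect-aware migrator executes different SQL (or nothing) per backend. A sketch of that idea; the types and SQL here are illustrative, not Grafana's migrator API:

```go
package main

import "fmt"

// dialectSQL holds per-database variants of one migration step; an empty
// string means "nothing to do on this dialect", which would explain why
// charset updates complete in microseconds on SQLite.
type dialectSQL map[string]string

// sqlFor returns the statement for a dialect and whether anything must run.
func sqlFor(step dialectSQL, dialect string) (string, bool) {
	s, ok := step[dialect]
	return s, ok && s != ""
}

func main() {
	updateCharset := dialectSQL{
		"mysql":   "ALTER TABLE dashboard CONVERT TO CHARACTER SET utf8mb4;",
		"sqlite3": "", // no-op: SQLite stores TEXT as UTF-8 already
	}
	for _, d := range []string{"mysql", "sqlite3"} {
		if s, ok := sqlFor(updateCharset, d); ok {
			fmt.Printf("%s: executing %q\n", d, s)
		} else {
			fmt.Printf("%s: skipping charset migration (no-op)\n", d)
		}
	}
}
```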
duration=298.043µs grafana | logger=migrator t=2025-06-19T23:12:47.433921036Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-19T23:12:47.444123352Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.201616ms grafana | logger=migrator t=2025-06-19T23:12:47.447239738Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-19T23:12:47.448450232Z level=info msg="Migration successfully executed" id="create session table" duration=1.209424ms grafana | logger=migrator t=2025-06-19T23:12:47.454490501Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-19T23:12:47.454623143Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=133.102µs grafana | logger=migrator t=2025-06-19T23:12:47.457720078Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-19T23:12:47.457802779Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=82.811µs grafana | logger=migrator t=2025-06-19T23:12:47.460720122Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-19T23:12:47.46137578Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=655.418µs grafana | logger=migrator t=2025-06-19T23:12:47.464128481Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-19T23:12:47.465102953Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=972.681µs grafana | logger=migrator t=2025-06-19T23:12:47.47096086Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-19T23:12:47.47099698Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=36.84µs grafana | logger=migrator t=2025-06-19T23:12:47.474403539Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-19T23:12:47.474440679Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=38.29µs grafana | logger=migrator t=2025-06-19T23:12:47.47803342Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-19T23:12:47.483042068Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.007778ms grafana | logger=migrator t=2025-06-19T23:12:47.486060622Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-19T23:12:47.489126097Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.064555ms grafana | logger=migrator t=2025-06-19T23:12:47.492176192Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-19T23:12:47.492255283Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=79.131µs grafana | logger=migrator t=2025-06-19T23:12:47.496614123Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-19T23:12:47.496688274Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=74.301µs grafana | 
logger=migrator t=2025-06-19T23:12:47.499567777Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-19T23:12:47.500772621Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.204324ms grafana | logger=migrator t=2025-06-19T23:12:47.504108789Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-19T23:12:47.504143529Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=35.78µs grafana | logger=migrator t=2025-06-19T23:12:47.507117483Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-19T23:12:47.510415211Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.298538ms grafana | logger=migrator t=2025-06-19T23:12:47.514506758Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-19T23:12:47.51464982Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=143.122µs grafana | logger=migrator t=2025-06-19T23:12:47.517412691Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-19T23:12:47.520594888Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.181557ms grafana | logger=migrator t=2025-06-19T23:12:47.523310529Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-19T23:12:47.526497005Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.185926ms grafana | logger=migrator t=2025-06-19T23:12:47.530796034Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-19T23:12:47.530815265Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=19.841µs grafana | logger=migrator t=2025-06-19T23:12:47.533410754Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-19T23:12:47.534206533Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=794.379µs grafana | logger=migrator t=2025-06-19T23:12:47.537143877Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-19T23:12:47.538032987Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=885.58µs grafana | logger=migrator t=2025-06-19T23:12:47.542551399Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-19T23:12:47.54354144Z level=info msg="Migration successfully executed" id="create alert table v1" duration=989.781µs grafana | logger=migrator t=2025-06-19T23:12:47.546214051Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-19T23:12:47.54701177Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=797.299µs grafana | logger=migrator t=2025-06-19T23:12:47.549528389Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-19T23:12:47.550299968Z level=info msg="Migration successfully executed" id="add index alert state" duration=771.148µs 
grafana | logger=migrator t=2025-06-19T23:12:47.554760969Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-19T23:12:47.555551768Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=790.639µs
grafana | logger=migrator t=2025-06-19T23:12:47.558272919Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-19T23:12:47.558926236Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=652.877µs
grafana | logger=migrator t=2025-06-19T23:12:47.562028272Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-19T23:12:47.562817441Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=788.879µs
grafana | logger=migrator t=2025-06-19T23:12:47.566945298Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.567706737Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=761.049µs
grafana | logger=migrator t=2025-06-19T23:12:47.569827941Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-19T23:12:47.579471571Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.64363ms
grafana | logger=migrator t=2025-06-19T23:12:47.582209863Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-19T23:12:47.582680648Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=470.095µs
grafana | logger=migrator t=2025-06-19T23:12:47.586875716Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-19T23:12:47.587584844Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=706.578µs
grafana | logger=migrator t=2025-06-19T23:12:47.59068239Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:47.591114235Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=434.335µs
grafana | logger=migrator t=2025-06-19T23:12:47.594043148Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-19T23:12:47.594809427Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=765.759µs
grafana | logger=migrator t=2025-06-19T23:12:47.599342929Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-19T23:12:47.600040647Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=697.338µs
grafana | logger=migrator t=2025-06-19T23:12:47.602210682Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-19T23:12:47.606066016Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.854904ms
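[editor's note: the alert_rule_tag v1-to-v2 sequence above — rename the old table aside, create the new table, recreate its unique index, copy the rows, drop the old table — is the portable rebuild pattern used when ALTER TABLE cannot express a schema change. A minimal SQLite sketch, with the column list assumed from the index name:]

    import sqlite3

    def rebuild_alert_rule_tag(conn: sqlite3.Connection) -> None:
        cur = conn.cursor()
        cur.execute("ALTER TABLE alert_rule_tag RENAME TO alert_rule_tag_v1")          # rename aside
        cur.execute("CREATE TABLE alert_rule_tag (alert_id INTEGER, tag_id INTEGER)")  # v2 schema (columns assumed)
        cur.execute("CREATE UNIQUE INDEX UQE_alert_rule_tag_alert_id_tag_id "
                    "ON alert_rule_tag (alert_id, tag_id)")
        cur.execute("INSERT INTO alert_rule_tag SELECT alert_id, tag_id FROM alert_rule_tag_v1")  # copy v1 to v2
        cur.execute("DROP TABLE alert_rule_tag_v1")
        conn.commit()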
msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-19T23:12:47.621248179Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.594261ms grafana | logger=migrator t=2025-06-19T23:12:47.625504558Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-19T23:12:47.629319802Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.813864ms grafana | logger=migrator t=2025-06-19T23:12:47.632087623Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-19T23:12:47.635667754Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.579521ms grafana | logger=migrator t=2025-06-19T23:12:47.638329245Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-19T23:12:47.639168835Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=839.04µs grafana | logger=migrator t=2025-06-19T23:12:47.643067439Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-19T23:12:47.64309954Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.82µs grafana | logger=migrator t=2025-06-19T23:12:47.648166007Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-19T23:12:47.648189478Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=24.101µs grafana | logger=migrator t=2025-06-19T23:12:47.650210611Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-19T23:12:47.6509852Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=774.199µs grafana | logger=migrator t=2025-06-19T23:12:47.653850053Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-19T23:12:47.654731903Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=881.28µs grafana | logger=migrator t=2025-06-19T23:12:47.659861101Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-19T23:12:47.660552039Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=687.248µs grafana | logger=migrator t=2025-06-19T23:12:47.665172772Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-19T23:12:47.665930471Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=757.139µs grafana | logger=migrator t=2025-06-19T23:12:47.67019436Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-19T23:12:47.67110135Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=906.36µs grafana | logger=migrator t=2025-06-19T23:12:47.673870882Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-19T23:12:47.677627875Z level=info 
msg="Migration successfully executed" id="Add for to alert table" duration=3.756403ms grafana | logger=migrator t=2025-06-19T23:12:47.681049184Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-19T23:12:47.684865297Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.815293ms grafana | logger=migrator t=2025-06-19T23:12:47.688842473Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-19T23:12:47.689015995Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=173.712µs grafana | logger=migrator t=2025-06-19T23:12:47.692139441Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-19T23:12:47.69298538Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=845.219µs grafana | logger=migrator t=2025-06-19T23:12:47.695967155Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-19T23:12:47.696748273Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=778.378µs grafana | logger=migrator t=2025-06-19T23:12:47.700560577Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-19T23:12:47.704264589Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.703372ms grafana | logger=migrator t=2025-06-19T23:12:47.707455936Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-19T23:12:47.707471766Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=16.54µs grafana | logger=migrator t=2025-06-19T23:12:47.710096096Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-19T23:12:47.710924096Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=827.01µs grafana | logger=migrator t=2025-06-19T23:12:47.715783011Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-19T23:12:47.717121037Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.337856ms grafana | logger=migrator t=2025-06-19T23:12:47.720618637Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-19T23:12:47.720744688Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=123.501µs grafana | logger=migrator t=2025-06-19T23:12:47.724083926Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-19T23:12:47.724959486Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=877.65µs grafana | logger=migrator t=2025-06-19T23:12:47.728842011Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-19T23:12:47.729718741Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=858.99µs grafana | logger=migrator 
grafana | logger=migrator t=2025-06-19T23:12:47.732756636Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-19T23:12:47.733577425Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=819.949µs
grafana | logger=migrator t=2025-06-19T23:12:47.736758011Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-19T23:12:47.737588951Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=830.48µs
grafana | logger=migrator t=2025-06-19T23:12:47.741409235Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-19T23:12:47.742338555Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=931.38µs
grafana | logger=migrator t=2025-06-19T23:12:47.745443661Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-19T23:12:47.7462944Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=850.289µs
grafana | logger=migrator t=2025-06-19T23:12:47.749377866Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-19T23:12:47.749406526Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.89µs
grafana | logger=migrator t=2025-06-19T23:12:47.753308551Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-19T23:12:47.757348907Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.039356ms
grafana | logger=migrator t=2025-06-19T23:12:47.760348751Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-19T23:12:47.76110942Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=762.979µs
grafana | logger=migrator t=2025-06-19T23:12:47.764378597Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-19T23:12:47.768431844Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.052857ms
grafana | logger=migrator t=2025-06-19T23:12:47.772546521Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-19T23:12:47.773172588Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=625.547µs
grafana | logger=migrator t=2025-06-19T23:12:47.776612207Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-19T23:12:47.777469077Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=856.43µs
grafana | logger=migrator t=2025-06-19T23:12:47.781143729Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-19T23:12:47.781954109Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=808.69µs
grafana | logger=migrator t=2025-06-19T23:12:47.785820443Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.215134ms grafana | logger=migrator t=2025-06-19T23:12:47.804157113Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-19T23:12:47.804663488Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=506.235µs grafana | logger=migrator t=2025-06-19T23:12:47.807665733Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-19T23:12:47.808543403Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=877.15µs grafana | logger=migrator t=2025-06-19T23:12:47.812740581Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-19T23:12:47.813005944Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=265.073µs grafana | logger=migrator t=2025-06-19T23:12:47.816010038Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-19T23:12:47.816533474Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=522.736µs grafana | logger=migrator t=2025-06-19T23:12:47.819800792Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-19T23:12:47.819992344Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=191.482µs grafana | logger=migrator t=2025-06-19T23:12:47.824091101Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.830933509Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.839258ms grafana | logger=migrator t=2025-06-19T23:12:47.862944085Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.868742652Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.799377ms grafana | logger=migrator t=2025-06-19T23:12:47.871769446Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.872598066Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=828.24µs grafana | logger=migrator t=2025-06-19T23:12:47.87554327Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.876392709Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=848.909µs grafana | logger=migrator t=2025-06-19T23:12:47.880341844Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-19T23:12:47.880560027Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=217.853µs grafana | logger=migrator t=2025-06-19T23:12:47.88344303Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-19T23:12:47.887603108Z level=info msg="Migration successfully 
executed" id="Add epoch_end column" duration=4.159428ms grafana | logger=migrator t=2025-06-19T23:12:47.890708923Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-19T23:12:47.891582393Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=872.66µs grafana | logger=migrator t=2025-06-19T23:12:47.894629808Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-19T23:12:47.8947896Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=159.792µs grafana | logger=migrator t=2025-06-19T23:12:47.898885767Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-19T23:12:47.899263621Z level=info msg="Migration successfully executed" id="Move region to single row" duration=377.804µs grafana | logger=migrator t=2025-06-19T23:12:47.902382897Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.903213816Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=830.589µs grafana | logger=migrator t=2025-06-19T23:12:47.906076869Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.906973759Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=896.6µs grafana | logger=migrator t=2025-06-19T23:12:47.910877654Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.911796604Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=917.54µs grafana | logger=migrator t=2025-06-19T23:12:47.915057882Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.915925242Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=866.89µs grafana | logger=migrator t=2025-06-19T23:12:47.919054688Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.919865917Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=811.049µs grafana | logger=migrator t=2025-06-19T23:12:47.924624701Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-19T23:12:47.925938176Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.316425ms grafana | logger=migrator t=2025-06-19T23:12:47.929178133Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-19T23:12:47.929196684Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=22.051µs grafana | logger=migrator t=2025-06-19T23:12:47.932191658Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-19T23:12:47.932209798Z level=info msg="Migration 
successfully executed" id="Increase prev_state column to length 40 not null" duration=18.34µs grafana | logger=migrator t=2025-06-19T23:12:47.936095943Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-19T23:12:47.936113533Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=18.17µs grafana | logger=migrator t=2025-06-19T23:12:47.939302269Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-19T23:12:47.940502763Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.207524ms grafana | logger=migrator t=2025-06-19T23:12:47.944091914Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-19T23:12:47.945320888Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.228504ms grafana | logger=migrator t=2025-06-19T23:12:47.949454735Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-19T23:12:47.950332575Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=877.62µs grafana | logger=migrator t=2025-06-19T23:12:47.953449721Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-19T23:12:47.954318321Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=868.2µs grafana | logger=migrator t=2025-06-19T23:12:47.957330916Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-19T23:12:47.957511248Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=180.682µs grafana | logger=migrator t=2025-06-19T23:12:47.960509242Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-19T23:12:47.960856646Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=346.994µs grafana | logger=migrator t=2025-06-19T23:12:47.964532038Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-19T23:12:47.964558278Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=26.58µs grafana | logger=migrator t=2025-06-19T23:12:47.967806885Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-19T23:12:47.975011098Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=7.203433ms grafana | logger=migrator t=2025-06-19T23:12:47.979313337Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-19T23:12:47.980705583Z level=info msg="Migration successfully executed" id="create team table" duration=1.395086ms grafana | logger=migrator t=2025-06-19T23:12:47.984963252Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-19T23:12:47.986330107Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.366525ms grafana | 
grafana | logger=migrator t=2025-06-19T23:12:47.989534254Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-19T23:12:47.990427064Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=892.65µs
grafana | logger=migrator t=2025-06-19T23:12:47.993455219Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-19T23:12:47.998014671Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.558952ms
grafana | logger=migrator t=2025-06-19T23:12:48.003308443Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-19T23:12:48.003492965Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=187.742µs
grafana | logger=migrator t=2025-06-19T23:12:48.005849534Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-19T23:12:48.006689554Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=839.64µs
grafana | logger=migrator t=2025-06-19T23:12:48.009929114Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-19T23:12:48.014411478Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.481894ms
grafana | logger=migrator t=2025-06-19T23:12:48.018232635Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-19T23:12:48.022697419Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.464284ms
grafana | logger=migrator t=2025-06-19T23:12:48.025979759Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-19T23:12:48.026733148Z level=info msg="Migration successfully executed" id="create team member table" duration=753.019µs
grafana | logger=migrator t=2025-06-19T23:12:48.03017733Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-19T23:12:48.031156152Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=977.832µs
grafana | logger=migrator t=2025-06-19T23:12:48.035236522Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-19T23:12:48.036141113Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=903.861µs
grafana | logger=migrator t=2025-06-19T23:12:48.039156179Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-19T23:12:48.04004338Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=886.921µs
grafana | logger=migrator t=2025-06-19T23:12:48.043218249Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-19T23:12:48.047886455Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.671406ms
grafana | logger=migrator t=2025-06-19T23:12:48.051877544Z level=info msg="Executing migration" id="Add column external to team_member table"
table" duration=4.583726ms grafana | logger=migrator t=2025-06-19T23:12:48.059543207Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-19T23:12:48.064144342Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.600605ms grafana | logger=migrator t=2025-06-19T23:12:48.067578974Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-19T23:12:48.068428594Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=849.2µs grafana | logger=migrator t=2025-06-19T23:12:48.071443351Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-19T23:12:48.07222298Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=779.049µs grafana | logger=migrator t=2025-06-19T23:12:48.10837341Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-19T23:12:48.109784827Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.410477ms grafana | logger=migrator t=2025-06-19T23:12:48.113259829Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-19T23:12:48.114678287Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.418427ms grafana | logger=migrator t=2025-06-19T23:12:48.118008397Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-19T23:12:48.118961029Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=951.962µs grafana | logger=migrator t=2025-06-19T23:12:48.123474103Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-19T23:12:48.12483842Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.364327ms grafana | logger=migrator t=2025-06-19T23:12:48.128295252Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-19T23:12:48.129859581Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.563229ms grafana | logger=migrator t=2025-06-19T23:12:48.133564886Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-19T23:12:48.135144565Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.579689ms grafana | logger=migrator t=2025-06-19T23:12:48.139570449Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-19T23:12:48.141572684Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=2.004734ms grafana | logger=migrator t=2025-06-19T23:12:48.145200318Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-19T23:12:48.145881836Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=681.088µs grafana | logger=migrator t=2025-06-19T23:12:48.150694054Z level=info msg="Executing migration" 
id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-19T23:12:48.151084569Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=389.775µs grafana | logger=migrator t=2025-06-19T23:12:48.154295258Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-19T23:12:48.15524324Z level=info msg="Migration successfully executed" id="create tag table" duration=947.782µs grafana | logger=migrator t=2025-06-19T23:12:48.16021523Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-19T23:12:48.161277943Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.062543ms grafana | logger=migrator t=2025-06-19T23:12:48.164168038Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-19T23:12:48.165063529Z level=info msg="Migration successfully executed" id="create login attempt table" duration=894.911µs grafana | logger=migrator t=2025-06-19T23:12:48.168236438Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-19T23:12:48.169366931Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.129933ms grafana | logger=migrator t=2025-06-19T23:12:48.174524454Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-19T23:12:48.176251355Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.726451ms grafana | logger=migrator t=2025-06-19T23:12:48.179723297Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T23:12:48.197366752Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.643025ms grafana | logger=migrator t=2025-06-19T23:12:48.20131903Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-19T23:12:48.202091269Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=771.299µs grafana | logger=migrator t=2025-06-19T23:12:48.207373494Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-19T23:12:48.208403336Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.028602ms grafana | logger=migrator t=2025-06-19T23:12:48.211648616Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-19T23:12:48.212080701Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=431.535µs grafana | logger=migrator t=2025-06-19T23:12:48.215135648Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-19T23:12:48.21611413Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=979.462µs grafana | logger=migrator t=2025-06-19T23:12:48.221130071Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-19T23:12:48.222045172Z level=info msg="Migration successfully executed" id="create user auth table" duration=914.361µs grafana | logger=migrator 
grafana | logger=migrator t=2025-06-19T23:12:48.226905361Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2025-06-19T23:12:48.227964314Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.058663ms
grafana | logger=migrator t=2025-06-19T23:12:48.230968631Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2025-06-19T23:12:48.231119142Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=151.212µs
grafana | logger=migrator t=2025-06-19T23:12:48.235923861Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.244155261Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.23124ms
grafana | logger=migrator t=2025-06-19T23:12:48.247425521Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.252889487Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.463966ms
grafana | logger=migrator t=2025-06-19T23:12:48.256506921Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.260417209Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.910288ms
grafana | logger=migrator t=2025-06-19T23:12:48.265581141Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.270899126Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.317395ms
grafana | logger=migrator t=2025-06-19T23:12:48.274697562Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.276005998Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.322636ms
grafana | logger=migrator t=2025-06-19T23:12:48.279277348Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.284678414Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.400306ms
grafana | logger=migrator t=2025-06-19T23:12:48.288362898Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
grafana | logger=migrator t=2025-06-19T23:12:48.293669253Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.305355ms
grafana | logger=migrator t=2025-06-19T23:12:48.298786415Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2025-06-19T23:12:48.299727217Z level=info msg="Migration successfully executed" id="create server_lock table" duration=940.312µs
grafana | logger=migrator t=2025-06-19T23:12:48.303027157Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2025-06-19T23:12:48.304411194Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.365816ms
grafana | logger=migrator t=2025-06-19T23:12:48.309937891Z level=info msg="Executing migration" id="create user auth token table"
successfully executed" id="create user auth token table" duration=1.038032ms grafana | logger=migrator t=2025-06-19T23:12:48.314390695Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-19T23:12:48.315556459Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.164844ms grafana | logger=migrator t=2025-06-19T23:12:48.326014196Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-19T23:12:48.328340594Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.325708ms grafana | logger=migrator t=2025-06-19T23:12:48.334303187Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-19T23:12:48.336330402Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=2.022995ms grafana | logger=migrator t=2025-06-19T23:12:48.340061957Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-19T23:12:48.345591254Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.528247ms grafana | logger=migrator t=2025-06-19T23:12:48.348962755Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-19T23:12:48.350043238Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.079863ms grafana | logger=migrator t=2025-06-19T23:12:48.355401704Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-19T23:12:48.364441603Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.03954ms grafana | logger=migrator t=2025-06-19T23:12:48.36741682Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-19T23:12:48.368126628Z level=info msg="Migration successfully executed" id="create cache_data table" duration=709.028µs grafana | logger=migrator t=2025-06-19T23:12:48.370937242Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-19T23:12:48.371875534Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=937.242µs grafana | logger=migrator t=2025-06-19T23:12:48.375293725Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-19T23:12:48.376241407Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=946.482µs grafana | logger=migrator t=2025-06-19T23:12:48.38145303Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-19T23:12:48.382847267Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.392727ms grafana | logger=migrator t=2025-06-19T23:12:48.386169678Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-19T23:12:48.38634508Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=175.572µs grafana | logger=migrator t=2025-06-19T23:12:48.390728433Z 
level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-19T23:12:48.390968396Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=239.803µs grafana | logger=migrator t=2025-06-19T23:12:48.39619285Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-19T23:12:48.397824399Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.62628ms grafana | logger=migrator t=2025-06-19T23:12:48.401256281Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T23:12:48.403025043Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.767442ms grafana | logger=migrator t=2025-06-19T23:12:48.406513135Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T23:12:48.408487279Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.973544ms grafana | logger=migrator t=2025-06-19T23:12:48.413904645Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T23:12:48.413971076Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=68.581µs grafana | logger=migrator t=2025-06-19T23:12:48.417241545Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T23:12:48.418368129Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.126314ms grafana | logger=migrator t=2025-06-19T23:12:48.421824051Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T23:12:48.423492191Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.66767ms grafana | logger=migrator t=2025-06-19T23:12:48.428960708Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T23:12:48.430310254Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.348656ms grafana | logger=migrator t=2025-06-19T23:12:48.433433802Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T23:12:48.434587286Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.152674ms grafana | logger=migrator t=2025-06-19T23:12:48.437862046Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-19T23:12:48.443765178Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.902112ms grafana | logger=migrator t=2025-06-19T23:12:48.451718485Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-19T23:12:48.452779418Z level=info msg="Migration successfully 
executed" id="drop alert_definition table" duration=1.060433ms grafana | logger=migrator t=2025-06-19T23:12:48.456215379Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-19T23:12:48.456641255Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=425.045µs grafana | logger=migrator t=2025-06-19T23:12:48.462613997Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-19T23:12:48.463980814Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.366727ms grafana | logger=migrator t=2025-06-19T23:12:48.467126802Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-19T23:12:48.468300796Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.167234ms grafana | logger=migrator t=2025-06-19T23:12:48.471667607Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-19T23:12:48.472796911Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.128884ms grafana | logger=migrator t=2025-06-19T23:12:48.478102115Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T23:12:48.478203317Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=101.342µs grafana | logger=migrator t=2025-06-19T23:12:48.481361425Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-19T23:12:48.483140957Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.780152ms grafana | logger=migrator t=2025-06-19T23:12:48.487181626Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-19T23:12:48.488873416Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.69197ms grafana | logger=migrator t=2025-06-19T23:12:48.493413552Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-19T23:12:48.494541375Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.127063ms grafana | logger=migrator t=2025-06-19T23:12:48.498027988Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-19T23:12:48.499131831Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.103573ms grafana | logger=migrator t=2025-06-19T23:12:48.503405463Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.512711956Z level=info msg="Migration successfully executed" id="add column current_state_end to 
alert_instance" duration=9.306803ms grafana | logger=migrator t=2025-06-19T23:12:48.516039357Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.516930248Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=890.401µs grafana | logger=migrator t=2025-06-19T23:12:48.520314589Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.521367142Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.051762ms grafana | logger=migrator t=2025-06-19T23:12:48.525788785Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.551597039Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.807854ms grafana | logger=migrator t=2025-06-19T23:12:48.572396092Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.606562457Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=34.163575ms grafana | logger=migrator t=2025-06-19T23:12:48.611774831Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.613615593Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.842962ms grafana | logger=migrator t=2025-06-19T23:12:48.61908521Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.61993463Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=916.251µs grafana | logger=migrator t=2025-06-19T23:12:48.623652655Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-19T23:12:48.632508113Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.816737ms grafana | logger=migrator t=2025-06-19T23:12:48.636678374Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-19T23:12:48.641089597Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.409823ms grafana | logger=migrator t=2025-06-19T23:12:48.645915896Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-19T23:12:48.646783316Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=867.26µs grafana | logger=migrator t=2025-06-19T23:12:48.650692974Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-19T23:12:48.651958909Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.265365ms grafana | logger=migrator t=2025-06-19T23:12:48.656222701Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 
grafana | logger=migrator t=2025-06-19T23:12:48.658084024Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.859333ms
grafana | logger=migrator t=2025-06-19T23:12:48.663630141Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2025-06-19T23:12:48.665400163Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.770782ms
grafana | logger=migrator t=2025-06-19T23:12:48.668843485Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-19T23:12:48.668979706Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=136.161µs
grafana | logger=migrator t=2025-06-19T23:12:48.67255675Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.679288832Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.730782ms
grafana | logger=migrator t=2025-06-19T23:12:48.68409423Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.691727563Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.633353ms
grafana | logger=migrator t=2025-06-19T23:12:48.695485679Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.702295291Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.809612ms
grafana | logger=migrator t=2025-06-19T23:12:48.706684045Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2025-06-19T23:12:48.70791989Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.235245ms
grafana | logger=migrator t=2025-06-19T23:12:48.712545226Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2025-06-19T23:12:48.714151866Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.60664ms
grafana | logger=migrator t=2025-06-19T23:12:48.718213125Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.724953877Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.740122ms
grafana | logger=migrator t=2025-06-19T23:12:48.728380139Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.733002485Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.622346ms
grafana | logger=migrator t=2025-06-19T23:12:48.738222508Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2025-06-19T23:12:48.739671186Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.481128ms
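[editor's note: "add column for to alert_rule" above adds a column literally named for; FOR is a reserved word in MySQL and PostgreSQL, so a migration like this has to quote the identifier. A one-line SQLite sketch with an assumed column type:]

    import sqlite3

    def add_for_column(conn: sqlite3.Connection) -> None:
        # The reserved word must be double-quoted to be used as a column name.
        conn.execute('ALTER TABLE alert_rule ADD COLUMN "for" INTEGER')
        conn.commit()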
grafana | logger=migrator t=2025-06-19T23:12:48.743218389Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:48.749468945Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.249936ms
grafana | logger=migrator t=2025-06-19T23:12:48.752993128Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:48.757771746Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.704737ms
grafana | logger=migrator t=2025-06-19T23:12:48.762576014Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:48.762720026Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=141.572µs
grafana | logger=migrator t=2025-06-19T23:12:48.766376971Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:48.767573955Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.196394ms
grafana | logger=migrator t=2025-06-19T23:12:48.771356231Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-19T23:12:48.773200354Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.843643ms
grafana | logger=migrator t=2025-06-19T23:12:48.777912691Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2025-06-19T23:12:48.779072145Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.159544ms
grafana | logger=migrator t=2025-06-19T23:12:48.782984323Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-19T23:12:48.783149455Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=165.862µs
grafana | logger=migrator t=2025-06-19T23:12:48.820289366Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2025-06-19T23:12:48.830660092Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.370576ms
grafana | logger=migrator t=2025-06-19T23:12:48.834876694Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-19T23:12:48.841451323Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.573359ms
grafana | logger=migrator t=2025-06-19T23:12:48.846328073Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-19T23:12:48.856979062Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=10.652059ms
grafana | logger=migrator t=2025-06-19T23:12:48.860393014Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
t=2025-06-19T23:12:48.865578637Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.184363ms grafana | logger=migrator t=2025-06-19T23:12:48.86913134Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-19T23:12:48.876873404Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.738924ms grafana | logger=migrator t=2025-06-19T23:12:48.881039155Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-19T23:12:48.881177907Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=139.312µs grafana | logger=migrator t=2025-06-19T23:12:48.884861181Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-19T23:12:48.885796683Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=935.082µs grafana | logger=migrator t=2025-06-19T23:12:48.891403821Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-19T23:12:48.897840829Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.436538ms grafana | logger=migrator t=2025-06-19T23:12:48.902334074Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-19T23:12:48.902488146Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=155.492µs grafana | logger=migrator t=2025-06-19T23:12:48.905850917Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-19T23:12:48.912455277Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.60379ms grafana | logger=migrator t=2025-06-19T23:12:48.915926149Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-19T23:12:48.917097953Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.170904ms grafana | logger=migrator t=2025-06-19T23:12:48.921597978Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-19T23:12:48.927116995Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.519077ms grafana | logger=migrator t=2025-06-19T23:12:48.930532897Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-19T23:12:48.931213145Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=680.178µs grafana | logger=migrator t=2025-06-19T23:12:48.934630066Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-19T23:12:48.93578142Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.151134ms grafana | logger=migrator t=2025-06-19T23:12:48.940188054Z 
level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-19T23:12:48.952135269Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=11.946345ms grafana | logger=migrator t=2025-06-19T23:12:48.955523631Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-19T23:12:48.956354021Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=830.03µs grafana | logger=migrator t=2025-06-19T23:12:48.960039465Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-19T23:12:48.961323961Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.284506ms grafana | logger=migrator t=2025-06-19T23:12:48.965966377Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-19T23:12:48.966924419Z level=info msg="Migration successfully executed" id="create alert_image table" duration=957.792µs grafana | logger=migrator t=2025-06-19T23:12:48.970478402Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-19T23:12:48.971665707Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.186945ms grafana | logger=migrator t=2025-06-19T23:12:48.97519886Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-19T23:12:48.975324851Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=126.561µs grafana | logger=migrator t=2025-06-19T23:12:48.980068129Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-19T23:12:48.982249605Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.181046ms grafana | logger=migrator t=2025-06-19T23:12:48.987227746Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-19T23:12:48.98923429Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.006024ms grafana | logger=migrator t=2025-06-19T23:12:48.993171368Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-19T23:12:48.994092839Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-19T23:12:49.001416499Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-19T23:12:49.002214378Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=797.729µs grafana | logger=migrator t=2025-06-19T23:12:49.006694301Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-19T23:12:49.008410891Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" 
duration=1.759651ms grafana | logger=migrator t=2025-06-19T23:12:49.013077196Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-19T23:12:49.020427003Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.346237ms grafana | logger=migrator t=2025-06-19T23:12:49.023939365Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-19T23:12:49.025113569Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.173574ms grafana | logger=migrator t=2025-06-19T23:12:49.029756954Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-19T23:12:49.031217501Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.457007ms grafana | logger=migrator t=2025-06-19T23:12:49.067071525Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-19T23:12:49.069369882Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=2.297847ms grafana | logger=migrator t=2025-06-19T23:12:49.074472973Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-19T23:12:49.076489537Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.017344ms grafana | logger=migrator t=2025-06-19T23:12:49.079919897Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-19T23:12:49.081059311Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.138484ms grafana | logger=migrator t=2025-06-19T23:12:49.084698614Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-19T23:12:49.084848576Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=149.582µs grafana | logger=migrator t=2025-06-19T23:12:49.089837355Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-19T23:12:49.089966616Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=69.31µs grafana | logger=migrator t=2025-06-19T23:12:49.09450058Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-19T23:12:49.108737658Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=14.239328ms grafana | logger=migrator t=2025-06-19T23:12:49.112351351Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-19T23:12:49.112986569Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=649.698µs grafana | logger=migrator t=2025-06-19T23:12:49.116541361Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-19T23:12:49.117954487Z level=info msg="Migration successfully executed" id="add index library_element 
org_id-folder_uid-name-kind" duration=1.412856ms grafana | logger=migrator t=2025-06-19T23:12:49.124512505Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-19T23:12:49.125235764Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=723.219µs grafana | logger=migrator t=2025-06-19T23:12:49.128851116Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-19T23:12:49.130099321Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.247625ms grafana | logger=migrator t=2025-06-19T23:12:49.133546852Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-19T23:12:49.134394602Z level=info msg="Migration successfully executed" id="create secrets table" duration=847.77µs grafana | logger=migrator t=2025-06-19T23:12:49.13843367Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-19T23:12:49.186831443Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.390952ms grafana | logger=migrator t=2025-06-19T23:12:49.19170321Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-19T23:12:49.199780816Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.076936ms grafana | logger=migrator t=2025-06-19T23:12:49.203692852Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-19T23:12:49.203843564Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=148.642µs grafana | logger=migrator t=2025-06-19T23:12:49.207007391Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-19T23:12:49.241163666Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.149135ms grafana | logger=migrator t=2025-06-19T23:12:49.245423906Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-19T23:12:49.276685526Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.25869ms grafana | logger=migrator t=2025-06-19T23:12:49.284766152Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-19T23:12:49.287038969Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=2.275916ms grafana | logger=migrator t=2025-06-19T23:12:49.296661303Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-19T23:12:49.298643196Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.980573ms grafana | logger=migrator t=2025-06-19T23:12:49.303519463Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-19T23:12:49.30408026Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=534.507µs grafana | logger=migrator t=2025-06-19T23:12:49.308879647Z level=info msg="Executing migration" id="create permission table" 
grafana | logger=migrator t=2025-06-19T23:12:49.309782898Z level=info msg="Migration successfully executed" id="create permission table" duration=902.651µs grafana | logger=migrator t=2025-06-19T23:12:49.318990177Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-19T23:12:49.32097192Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.984963ms grafana | logger=migrator t=2025-06-19T23:12:49.328155835Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-19T23:12:49.32937812Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.223115ms grafana | logger=migrator t=2025-06-19T23:12:49.336876038Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-19T23:12:49.338045862Z level=info msg="Migration successfully executed" id="create role table" duration=1.173034ms grafana | logger=migrator t=2025-06-19T23:12:49.342337253Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-19T23:12:49.350045594Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.710281ms grafana | logger=migrator t=2025-06-19T23:12:49.356174987Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-19T23:12:49.364514866Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.339369ms grafana | logger=migrator t=2025-06-19T23:12:49.368912578Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-19T23:12:49.370492046Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.582638ms grafana | logger=migrator t=2025-06-19T23:12:49.379895808Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-19T23:12:49.381187513Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.292195ms grafana | logger=migrator t=2025-06-19T23:12:49.386598107Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-19T23:12:49.387788591Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.190504ms grafana | logger=migrator t=2025-06-19T23:12:49.391479735Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-19T23:12:49.393323387Z level=info msg="Migration successfully executed" id="create team role table" duration=1.846622ms grafana | logger=migrator t=2025-06-19T23:12:49.398083763Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-19T23:12:49.399170996Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.087343ms grafana | logger=migrator t=2025-06-19T23:12:49.403494757Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-19T23:12:49.404649511Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.154034ms grafana | logger=migrator t=2025-06-19T23:12:49.407414163Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator 
t=2025-06-19T23:12:49.408639758Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.225505ms grafana | logger=migrator t=2025-06-19T23:12:49.412462633Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-19T23:12:49.413292953Z level=info msg="Migration successfully executed" id="create user role table" duration=830µs grafana | logger=migrator t=2025-06-19T23:12:49.417649434Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-19T23:12:49.418707717Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.059603ms grafana | logger=migrator t=2025-06-19T23:12:49.42233986Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-19T23:12:49.423391742Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.048762ms grafana | logger=migrator t=2025-06-19T23:12:49.427056756Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-19T23:12:49.428122538Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.065292ms grafana | logger=migrator t=2025-06-19T23:12:49.433435941Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-19T23:12:49.434314782Z level=info msg="Migration successfully executed" id="create builtin role table" duration=878.031µs grafana | logger=migrator t=2025-06-19T23:12:49.438112677Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-19T23:12:49.439122649Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.008552ms grafana | logger=migrator t=2025-06-19T23:12:49.442492858Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-19T23:12:49.44350068Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.006932ms grafana | logger=migrator t=2025-06-19T23:12:49.447151954Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-19T23:12:49.455011057Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.858324ms grafana | logger=migrator t=2025-06-19T23:12:49.460075727Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-19T23:12:49.461155269Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.078762ms grafana | logger=migrator t=2025-06-19T23:12:49.464979985Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-19T23:12:49.466152539Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.174943ms grafana | logger=migrator t=2025-06-19T23:12:49.470186456Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-19T23:12:49.471930157Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.742901ms grafana | logger=migrator t=2025-06-19T23:12:49.477401252Z level=info msg="Executing migration" id="add unique 
index role.uid" grafana | logger=migrator t=2025-06-19T23:12:49.478537365Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.135483ms grafana | logger=migrator t=2025-06-19T23:12:49.481987436Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-19T23:12:49.483265021Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.277075ms grafana | logger=migrator t=2025-06-19T23:12:49.487496411Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-19T23:12:49.489300293Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.803422ms grafana | logger=migrator t=2025-06-19T23:12:49.49497105Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-19T23:12:49.504159508Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.187148ms grafana | logger=migrator t=2025-06-19T23:12:49.533087291Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-19T23:12:49.542548493Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.461832ms grafana | logger=migrator t=2025-06-19T23:12:49.54654525Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-19T23:12:49.554632276Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.087126ms grafana | logger=migrator t=2025-06-19T23:12:49.560194562Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-19T23:12:49.568427549Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.231947ms grafana | logger=migrator t=2025-06-19T23:12:49.571662297Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-19T23:12:49.57278453Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.121243ms grafana | logger=migrator t=2025-06-19T23:12:49.576565685Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-19T23:12:49.577601498Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.035573ms grafana | logger=migrator t=2025-06-19T23:12:49.584191795Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-19T23:12:49.585196517Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.004692ms grafana | logger=migrator t=2025-06-19T23:12:49.588865281Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-19T23:12:49.601834894Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=12.970203ms grafana | logger=migrator t=2025-06-19T23:12:49.607094427Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-19T23:12:49.607927226Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group 
mapping UID index" duration=832.329µs grafana | logger=migrator t=2025-06-19T23:12:49.612237237Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-19T23:12:49.613209819Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=972.412µs grafana | logger=migrator t=2025-06-19T23:12:49.61834727Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-19T23:12:49.619695126Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.350836ms grafana | logger=migrator t=2025-06-19T23:12:49.623998767Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-19T23:12:49.625638966Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.638899ms grafana | logger=migrator t=2025-06-19T23:12:49.630022648Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-19T23:12:49.630054188Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=27.36µs grafana | logger=migrator t=2025-06-19T23:12:49.634171377Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-19T23:12:49.634990307Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=818.25µs grafana | logger=migrator t=2025-06-19T23:12:49.639662402Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-19T23:12:49.639721683Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=60.041µs grafana | logger=migrator t=2025-06-19T23:12:49.644046844Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-19T23:12:49.644660401Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=613.357µs grafana | logger=migrator t=2025-06-19T23:12:49.648978592Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-19T23:12:49.649790202Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=812.47µs grafana | logger=migrator t=2025-06-19T23:12:49.654156603Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-19T23:12:49.655104185Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=946.882µs grafana | logger=migrator t=2025-06-19T23:12:49.658719457Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-19T23:12:49.65892009Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=200.143µs grafana | logger=migrator t=2025-06-19T23:12:49.662213119Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-19T23:12:49.662677975Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=464.295µs grafana | logger=migrator t=2025-06-19T23:12:49.667273269Z level=info msg="Executing migration" id="create query_history_star table 
v1" grafana | logger=migrator t=2025-06-19T23:12:49.668532744Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.258755ms grafana | logger=migrator t=2025-06-19T23:12:49.672609082Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-19T23:12:49.674311202Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.70139ms grafana | logger=migrator t=2025-06-19T23:12:49.677558001Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-19T23:12:49.685629856Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.071475ms grafana | logger=migrator t=2025-06-19T23:12:49.689725965Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-19T23:12:49.689742845Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=20.18µs grafana | logger=migrator t=2025-06-19T23:12:49.694536791Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-19T23:12:49.695512153Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=974.792µs grafana | logger=migrator t=2025-06-19T23:12:49.699399729Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-19T23:12:49.700472202Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.071253ms grafana | logger=migrator t=2025-06-19T23:12:49.705104477Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-19T23:12:49.706820887Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.71603ms grafana | logger=migrator t=2025-06-19T23:12:49.712575885Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-19T23:12:49.720911714Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.335319ms grafana | logger=migrator t=2025-06-19T23:12:49.726381438Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:49.72739818Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.016992ms grafana | logger=migrator t=2025-06-19T23:12:49.732481541Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:49.733499383Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.014592ms grafana | logger=migrator t=2025-06-19T23:12:49.739295171Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T23:12:49.760969088Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.671207ms grafana | logger=migrator t=2025-06-19T23:12:49.936720018Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-19T23:12:49.938021733Z level=info msg="Migration successfully executed" id="create 
correlation v2" duration=1.303685ms grafana | logger=migrator t=2025-06-19T23:12:49.999836935Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.002519347Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.685422ms grafana | logger=migrator t=2025-06-19T23:12:50.009424791Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.011322264Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.894793ms grafana | logger=migrator t=2025-06-19T23:12:50.0159644Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-19T23:12:50.017591789Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.626539ms grafana | logger=migrator t=2025-06-19T23:12:50.022952074Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-19T23:12:50.023198447Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=244.103µs grafana | logger=migrator t=2025-06-19T23:12:50.026515987Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-19T23:12:50.027159574Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=642.967µs grafana | logger=migrator t=2025-06-19T23:12:50.030040829Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-19T23:12:50.036124052Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.082673ms grafana | logger=migrator t=2025-06-19T23:12:50.038973777Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-19T23:12:50.044913608Z level=info msg="Migration successfully executed" id="add type column" duration=5.939331ms grafana | logger=migrator t=2025-06-19T23:12:50.051182514Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-19T23:12:50.052163215Z level=info msg="Migration successfully executed" id="create entity_events table" duration=980.531µs grafana | logger=migrator t=2025-06-19T23:12:50.056602639Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-19T23:12:50.057574021Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=973.212µs grafana | logger=migrator t=2025-06-19T23:12:50.060383544Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.06083568Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.065173432Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.065602247Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.068419631Z level=info msg="Executing migration" id="Drop old dashboard public 
config table" grafana | logger=migrator t=2025-06-19T23:12:50.06916272Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=744.499µs grafana | logger=migrator t=2025-06-19T23:12:50.073145328Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-19T23:12:50.07413953Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=994.122µs grafana | logger=migrator t=2025-06-19T23:12:50.077776534Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.078812046Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.035512ms grafana | logger=migrator t=2025-06-19T23:12:50.083184679Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T23:12:50.084334283Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.149414ms grafana | logger=migrator t=2025-06-19T23:12:50.089963151Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.091696141Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.73266ms grafana | logger=migrator t=2025-06-19T23:12:50.09570057Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.097069036Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.369616ms grafana | logger=migrator t=2025-06-19T23:12:50.099651617Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-19T23:12:50.100425967Z level=info msg="Migration successfully executed" id="Drop public config table" duration=776.92µs grafana | logger=migrator t=2025-06-19T23:12:50.104124971Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-19T23:12:50.105229154Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.103483ms grafana | logger=migrator t=2025-06-19T23:12:50.108654196Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.1106361Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.972973ms grafana | logger=migrator t=2025-06-19T23:12:50.116139276Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-19T23:12:50.117652814Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.513468ms grafana | logger=migrator t=2025-06-19T23:12:50.122890907Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-19T23:12:50.12400762Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.116463ms grafana | logger=migrator 
t=2025-06-19T23:12:50.128504005Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-19T23:12:50.152963029Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.451894ms grafana | logger=migrator t=2025-06-19T23:12:50.158986792Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-19T23:12:50.1688293Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.842468ms grafana | logger=migrator t=2025-06-19T23:12:50.173848711Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-19T23:12:50.183270084Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.422413ms grafana | logger=migrator t=2025-06-19T23:12:50.188461587Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-19T23:12:50.18872237Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=261.914µs grafana | logger=migrator t=2025-06-19T23:12:50.192745368Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-19T23:12:50.201745146Z level=info msg="Migration successfully executed" id="add share column" duration=8.999448ms grafana | logger=migrator t=2025-06-19T23:12:50.207916051Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-19T23:12:50.208128403Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=212.522µs grafana | logger=migrator t=2025-06-19T23:12:50.234628482Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-19T23:12:50.235939498Z level=info msg="Migration successfully executed" id="create file table" duration=1.311916ms grafana | logger=migrator t=2025-06-19T23:12:50.239602162Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-19T23:12:50.240764966Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.162544ms grafana | logger=migrator t=2025-06-19T23:12:50.244586232Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-19T23:12:50.245807467Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.221575ms grafana | logger=migrator t=2025-06-19T23:12:50.249929707Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-19T23:12:50.250772907Z level=info msg="Migration successfully executed" id="create file_meta table" duration=842.76µs grafana | logger=migrator t=2025-06-19T23:12:50.254260519Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-19T23:12:50.255420233Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.156554ms grafana | logger=migrator t=2025-06-19T23:12:50.259154068Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-19T23:12:50.259169968Z level=info msg="Migration 
successfully executed" id="set path collation in file table" duration=16.54µs grafana | logger=migrator t=2025-06-19T23:12:50.265293312Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-19T23:12:50.265336852Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=45.07µs grafana | logger=migrator t=2025-06-19T23:12:50.269374531Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-19T23:12:50.269900347Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=526.106µs grafana | logger=migrator t=2025-06-19T23:12:50.275037529Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-19T23:12:50.275244591Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=206.992µs grafana | logger=migrator t=2025-06-19T23:12:50.280613866Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-19T23:12:50.281897941Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.283955ms grafana | logger=migrator t=2025-06-19T23:12:50.286149953Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-19T23:12:50.295440575Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.290102ms grafana | logger=migrator t=2025-06-19T23:12:50.302799393Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-19T23:12:50.302956785Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.652µs grafana | logger=migrator t=2025-06-19T23:12:50.306000022Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-19T23:12:50.306871702Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=871.2µs grafana | logger=migrator t=2025-06-19T23:12:50.310634947Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-19T23:12:50.310952761Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=317.924µs grafana | logger=migrator t=2025-06-19T23:12:50.314329782Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-19T23:12:50.314795928Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=466.026µs grafana | logger=migrator t=2025-06-19T23:12:50.318385611Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-19T23:12:50.319259981Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=874.05µs grafana | logger=migrator t=2025-06-19T23:12:50.323477292Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-19T23:12:50.332620152Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.14206ms grafana | logger=migrator t=2025-06-19T23:12:50.336777982Z level=info 
msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-19T23:12:50.346840293Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.061671ms grafana | logger=migrator t=2025-06-19T23:12:50.350397836Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-19T23:12:50.351442129Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.042323ms grafana | logger=migrator t=2025-06-19T23:12:50.355024312Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-19T23:12:50.434010133Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.985361ms grafana | logger=migrator t=2025-06-19T23:12:50.489986407Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-19T23:12:50.492504838Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.518481ms grafana | logger=migrator t=2025-06-19T23:12:50.496231873Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-19T23:12:50.497335866Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.103653ms grafana | logger=migrator t=2025-06-19T23:12:50.503152176Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-19T23:12:50.531917302Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.765686ms grafana | logger=migrator t=2025-06-19T23:12:50.535546726Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-19T23:12:50.542705832Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.158406ms grafana | logger=migrator t=2025-06-19T23:12:50.548053876Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-19T23:12:50.548513062Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=458.396µs grafana | logger=migrator t=2025-06-19T23:12:50.557783974Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-19T23:12:50.558072607Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=291.743µs grafana | logger=migrator t=2025-06-19T23:12:50.560433336Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-19T23:12:50.560636588Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=208.393µs grafana | logger=migrator t=2025-06-19T23:12:50.562694343Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-19T23:12:50.562839515Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=145.231µs grafana | logger=migrator 
t=2025-06-19T23:12:50.566787582Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-19T23:12:50.566951364Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=163.472µs grafana | logger=migrator t=2025-06-19T23:12:50.568544433Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-19T23:12:50.569326433Z level=info msg="Migration successfully executed" id="create folder table" duration=781.75µs grafana | logger=migrator t=2025-06-19T23:12:50.571805412Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-19T23:12:50.572611292Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=803.21µs grafana | logger=migrator t=2025-06-19T23:12:50.575113772Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-19T23:12:50.575884532Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=770.29µs grafana | logger=migrator t=2025-06-19T23:12:50.580260024Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-19T23:12:50.580276344Z level=info msg="Migration successfully executed" id="Update folder title length" duration=16.59µs grafana | logger=migrator t=2025-06-19T23:12:50.582273699Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-19T23:12:50.583056268Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=782.069µs grafana | logger=migrator t=2025-06-19T23:12:50.586908565Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-19T23:12:50.587683094Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=774.029µs grafana | logger=migrator t=2025-06-19T23:12:50.591405139Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-19T23:12:50.592195148Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=789.549µs grafana | logger=migrator t=2025-06-19T23:12:50.594030401Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-19T23:12:50.594330404Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=299.683µs grafana | logger=migrator t=2025-06-19T23:12:50.596790344Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-19T23:12:50.596962636Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=172.002µs grafana | logger=migrator t=2025-06-19T23:12:50.602594814Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-19T23:12:50.603671677Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.076423ms grafana | logger=migrator t=2025-06-19T23:12:50.606101986Z level=info msg="Executing migration" id="Add unique index 
UQE_folder_org_id_uid"
grafana | logger=migrator t=2025-06-19T23:12:50.606878795Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=776.379µs
grafana | logger=migrator t=2025-06-19T23:12:50.610547689Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
grafana | logger=migrator t=2025-06-19T23:12:50.611284978Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=737.009µs
grafana | logger=migrator t=2025-06-19T23:12:50.616795515Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-19T23:12:50.617578954Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=782.979µs
grafana | logger=migrator t=2025-06-19T23:12:50.621101866Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
grafana | logger=migrator t=2025-06-19T23:12:50.621864406Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=760.3µs
grafana | logger=migrator t=2025-06-19T23:12:50.624565068Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-19T23:12:50.625311037Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=745.909µs
grafana | logger=migrator t=2025-06-19T23:12:50.630926775Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2025-06-19T23:12:50.631560922Z level=info msg="Migration successfully executed" id="create anon_device table" duration=634.617µs
grafana | logger=migrator t=2025-06-19T23:12:50.662658547Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2025-06-19T23:12:50.663420876Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=762.469µs
grafana | logger=migrator t=2025-06-19T23:12:50.668153153Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2025-06-19T23:12:50.668927492Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=774.039µs
grafana | logger=migrator t=2025-06-19T23:12:50.671860638Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2025-06-19T23:12:50.672477895Z level=info msg="Migration successfully executed" id="create signing_key table" duration=616.547µs
grafana | logger=migrator t=2025-06-19T23:12:50.675238338Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2025-06-19T23:12:50.676020558Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=781.03µs
grafana | logger=migrator t=2025-06-19T23:12:50.680548732Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2025-06-19T23:12:50.681471643Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=922.571µs
grafana | logger=migrator t=2025-06-19T23:12:50.684381798Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2025-06-19T23:12:50.684594051Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=212.113µs
grafana | logger=migrator t=2025-06-19T23:12:50.687480366Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2025-06-19T23:12:50.69441909Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.938394ms
grafana | logger=migrator t=2025-06-19T23:12:50.697250574Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2025-06-19T23:12:50.69774865Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=498.606µs
grafana | logger=migrator t=2025-06-19T23:12:50.732789522Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2025-06-19T23:12:50.732801682Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=11.76µs
grafana | logger=migrator t=2025-06-19T23:12:50.736404285Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2025-06-19T23:12:50.737303106Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=898.181µs
grafana | logger=migrator t=2025-06-19T23:12:50.74092968Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2025-06-19T23:12:50.74094273Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=13.21µs
grafana | logger=migrator t=2025-06-19T23:12:50.745844859Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2025-06-19T23:12:50.746696349Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=851.3µs
grafana | logger=migrator t=2025-06-19T23:12:50.749670605Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2025-06-19T23:12:50.750460714Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=790.189µs
grafana | logger=migrator t=2025-06-19T23:12:50.753282608Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2025-06-19T23:12:50.754062538Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=779.6µs
grafana | logger=migrator t=2025-06-19T23:12:50.758971477Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2025-06-19T23:12:50.759720126Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=746.649µs
grafana | logger=migrator t=2025-06-19T23:12:50.762866354Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2025-06-19T23:12:50.76337845Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=512.136µs
grafana | logger=migrator t=2025-06-19T23:12:50.767067824Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2025-06-19T23:12:50.767772063Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=708.069µs
grafana | logger=migrator t=2025-06-19T23:12:50.771384206Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
grafana | logger=migrator t=2025-06-19T23:12:50.772248967Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=864.341µs
grafana | logger=migrator t=2025-06-19T23:12:50.775697338Z level=info msg="Executing migration" id="create cloud_migration table v1"
grafana | logger=migrator t=2025-06-19T23:12:50.776811622Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.113724ms
grafana | logger=migrator t=2025-06-19T23:12:50.781328666Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
grafana | logger=migrator t=2025-06-19T23:12:50.782424869Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.093003ms
grafana | logger=migrator t=2025-06-19T23:12:50.785947852Z level=info msg="Executing migration" id="add stack_id column"
grafana | logger=migrator t=2025-06-19T23:12:50.79660342Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.651248ms
grafana | logger=migrator t=2025-06-19T23:12:50.801455448Z level=info msg="Executing migration" id="add region_slug column"
grafana | logger=migrator t=2025-06-19T23:12:50.808885388Z level=info msg="Migration successfully executed" id="add region_slug column" duration=7.4279ms
grafana | logger=migrator t=2025-06-19T23:12:50.814717669Z level=info msg="Executing migration" id="add cluster_slug column"
grafana | logger=migrator t=2025-06-19T23:12:50.822081687Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.361039ms
grafana | logger=migrator t=2025-06-19T23:12:50.827215419Z level=info msg="Executing migration" id="add migration uid column"
grafana | logger=migrator t=2025-06-19T23:12:50.834686219Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.47366ms
grafana | logger=migrator t=2025-06-19T23:12:50.880092276Z level=info msg="Executing migration" id="Update uid column values for migration"
grafana | logger=migrator t=2025-06-19T23:12:50.880576061Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=486.955µs
grafana | logger=migrator t=2025-06-19T23:12:51.010308072Z level=info msg="Executing migration" id="Add unique index migration_uid"
grafana | logger=migrator t=2025-06-19T23:12:51.013308027Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=3.000495ms
grafana | logger=migrator t=2025-06-19T23:12:51.025493291Z level=info msg="Executing migration" id="add migration run uid column"
grafana | logger=migrator t=2025-06-19T23:12:51.032647976Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=7.158025ms
grafana | logger=migrator t=2025-06-19T23:12:51.037539543Z level=info msg="Executing migration" id="Update uid column values for migration run"
grafana | logger=migrator t=2025-06-19T23:12:51.037752846Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=214.633µs
grafana | logger=migrator t=2025-06-19T23:12:51.04067331Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
grafana | logger=migrator t=2025-06-19T23:12:51.041828893Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.152663ms
grafana | logger=migrator t=2025-06-19T23:12:51.045178343Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-19T23:12:51.076429501Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=31.233928ms
grafana | logger=migrator t=2025-06-19T23:12:51.08142335Z level=info msg="Executing migration" id="create cloud_migration_session v2"
grafana | logger=migrator t=2025-06-19T23:12:51.08232623Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=902.99µs
grafana | logger=migrator t=2025-06-19T23:12:51.085425097Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
grafana | logger=migrator t=2025-06-19T23:12:51.086795603Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.369656ms
grafana | logger=migrator t=2025-06-19T23:12:51.090902422Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:51.09160652Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=704.158µs
grafana | logger=migrator t=2025-06-19T23:12:51.096603789Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
grafana | logger=migrator t=2025-06-19T23:12:51.097678831Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.074592ms
grafana | logger=migrator t=2025-06-19T23:12:51.101921511Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-19T23:12:51.127601854Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=25.672383ms
grafana | logger=migrator t=2025-06-19T23:12:51.135477886Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
grafana | logger=migrator t=2025-06-19T23:12:51.136402047Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=923.311µs
grafana | logger=migrator t=2025-06-19T23:12:51.141884052Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
grafana | logger=migrator t=2025-06-19T23:12:51.142742022Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=858.3µs
grafana | logger=migrator t=2025-06-19T23:12:51.148005704Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
grafana | logger=migrator t=2025-06-19T23:12:51.148250367Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=244.603µs
grafana | logger=migrator t=2025-06-19T23:12:51.152059002Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
grafana | logger=migrator t=2025-06-19T23:12:51.1528015Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=739.858µs
grafana | logger=migrator t=2025-06-19T23:12:51.158566228Z level=info msg="Executing migration" id="add snapshot upload_url column"
grafana | logger=migrator t=2025-06-19T23:12:51.165434439Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=6.867861ms
grafana | logger=migrator t=2025-06-19T23:12:51.176623571Z level=info msg="Executing migration" id="add snapshot status column"
grafana | logger=migrator t=2025-06-19T23:12:51.18419736Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.576349ms
grafana | logger=migrator t=2025-06-19T23:12:51.187494349Z level=info msg="Executing migration" id="add snapshot local_directory column"
grafana | logger=migrator t=2025-06-19T23:12:51.19690245Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.406271ms
grafana | logger=migrator t=2025-06-19T23:12:51.202335734Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
grafana | logger=migrator t=2025-06-19T23:12:51.213511475Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=11.174361ms
grafana | logger=migrator t=2025-06-19T23:12:51.216879795Z level=info msg="Executing migration" id="add snapshot encryption_key column"
grafana | logger=migrator t=2025-06-19T23:12:51.223760956Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=6.878501ms
grafana | logger=migrator t=2025-06-19T23:12:51.252572095Z level=info msg="Executing migration" id="add snapshot error_string column"
grafana | logger=migrator t=2025-06-19T23:12:51.264571857Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=11.998161ms
grafana | logger=migrator t=2025-06-19T23:12:51.268131218Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
grafana | logger=migrator t=2025-06-19T23:12:51.268851707Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=720.399µs
grafana | logger=migrator t=2025-06-19T23:12:51.272947525Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
grafana | logger=migrator t=2025-06-19T23:12:51.307807606Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=34.858931ms
grafana | logger=migrator t=2025-06-19T23:12:51.312218568Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
grafana | logger=migrator t=2025-06-19T23:12:51.319056938Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=6.83864ms
grafana | logger=migrator t=2025-06-19T23:12:51.323544311Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
grafana | logger=migrator t=2025-06-19T23:12:51.333099024Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.550943ms
grafana | logger=migrator t=2025-06-19T23:12:51.368346059Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column"
grafana | logger=migrator t=2025-06-19T23:12:51.379129845Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.777406ms
grafana | logger=migrator t=2025-06-19T23:12:51.382530656Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column"
grafana | logger=migrator t=2025-06-19T23:12:51.392456533Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.924907ms
grafana | logger=migrator t=2025-06-19T23:12:51.395848153Z level=info msg="Executing migration" id="increase resource_uid column length"
grafana | logger=migrator t=2025-06-19T23:12:51.395863573Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=16.2µs
grafana | logger=migrator t=2025-06-19T23:12:51.401250786Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
grafana | logger=migrator t=2025-06-19T23:12:51.401270336Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=20.36µs
grafana | logger=migrator t=2025-06-19T23:12:51.404692857Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.415979989Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.288062ms
grafana | logger=migrator t=2025-06-19T23:12:51.419594702Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.431410261Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.790629ms
grafana | logger=migrator t=2025-06-19T23:12:51.436731454Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
grafana | logger=migrator t=2025-06-19T23:12:51.437066898Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=335.094µs
grafana | logger=migrator t=2025-06-19T23:12:51.440227785Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
grafana | logger=migrator t=2025-06-19T23:12:51.440528559Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=299.974µs
grafana | logger=migrator t=2025-06-19T23:12:51.44400446Z level=info msg="Executing migration" id="add record column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.4542634Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.2585ms
grafana | logger=migrator t=2025-06-19T23:12:51.45762938Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.467571327Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.941047ms
grafana | logger=migrator t=2025-06-19T23:12:51.484374215Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
grafana | logger=migrator t=2025-06-19T23:12:51.49668707Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=12.316765ms
grafana | logger=migrator t=2025-06-19T23:12:51.502719501Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
grafana | logger=migrator t=2025-06-19T23:12:51.512214543Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.494462ms
grafana | logger=migrator t=2025-06-19T23:12:51.533810277Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
grafana | logger=migrator t=2025-06-19T23:12:51.534268973Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=459.256µs
grafana | logger=migrator t=2025-06-19T23:12:51.53743672Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.545562915Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=8.125465ms
grafana | logger=migrator t=2025-06-19T23:12:51.550271401Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.559960095Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.688124ms
grafana | logger=migrator t=2025-06-19T23:12:51.563134173Z level=info msg="Executing migration" id="delete orphaned service account permissions"
grafana | logger=migrator t=2025-06-19T23:12:51.563446206Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=311.973µs
grafana | logger=migrator t=2025-06-19T23:12:51.566799696Z level=info msg="Executing migration" id="adding action set permissions"
grafana | logger=migrator t=2025-06-19T23:12:51.567287831Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=488.105µs
grafana | logger=migrator t=2025-06-19T23:12:51.570647351Z level=info msg="Executing migration" id="create user_external_session table"
grafana | logger=migrator t=2025-06-19T23:12:51.571700573Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.051072ms
grafana | logger=migrator t=2025-06-19T23:12:51.576452639Z level=info msg="Executing migration" id="increase name_id column length to 1024"
grafana | logger=migrator t=2025-06-19T23:12:51.57647955Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=27.431µs
grafana | logger=migrator t=2025-06-19T23:12:51.579632687Z level=info msg="Executing migration" id="increase session_id column length to 1024"
grafana | logger=migrator t=2025-06-19T23:12:51.579657807Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=25.73µs
grafana | logger=migrator t=2025-06-19T23:12:51.582197217Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create"
grafana | logger=migrator t=2025-06-19T23:12:51.582727993Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=530.416µs
grafana | logger=migrator t=2025-06-19T23:12:51.588790215Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.600117118Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.330863ms
grafana | logger=migrator t=2025-06-19T23:12:51.603746641Z level=info msg="Executing migration" id="add updated_by column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.610600602Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=6.85351ms
grafana | logger=migrator t=2025-06-19T23:12:51.618594816Z level=info msg="Executing migration" id="add alert_rule_state table"
grafana | logger=migrator t=2025-06-19T23:12:51.620322306Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.72648ms
grafana | logger=migrator t=2025-06-19T23:12:51.627619442Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns"
grafana | logger=migrator t=2025-06-19T23:12:51.629801948Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.181736ms
grafana | logger=migrator t=2025-06-19T23:12:51.678104536Z level=info msg="Executing migration" id="add guid column to alert_rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.686362863Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=8.260327ms
grafana | logger=migrator t=2025-06-19T23:12:51.722879663Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.735223998Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=12.344535ms
grafana | logger=migrator t=2025-06-19T23:12:51.760486916Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
grafana | logger=migrator t=2025-06-19T23:12:51.760514616Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
grafana | logger=migrator t=2025-06-19T23:12:51.762841063Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
grafana | logger=migrator t=2025-06-19T23:12:51.762950595Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=2.46066ms
grafana | logger=migrator t=2025-06-19T23:12:51.766609718Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
grafana | logger=migrator t=2025-06-19T23:12:51.767811342Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=1.203404ms
grafana | logger=migrator t=2025-06-19T23:12:51.79739029Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-19T23:12:51.800421346Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=3.033566ms
grafana | logger=migrator t=2025-06-19T23:12:51.870200087Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
grafana | logger=migrator t=2025-06-19T23:12:51.872293632Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.092545ms
grafana | logger=migrator t=2025-06-19T23:12:51.924569298Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
grafana | logger=migrator t=2025-06-19T23:12:51.926764954Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.205276ms
grafana | logger=migrator t=2025-06-19T23:12:52.001312841Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
grafana | logger=migrator t=2025-06-19T23:12:52.003064172Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.747151ms
grafana | logger=migrator t=2025-06-19T23:12:52.026526186Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:52.041348508Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=14.825162ms
grafana | logger=migrator t=2025-06-19T23:12:52.04663439Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version"
grafana | logger=migrator t=2025-06-19T23:12:52.058159984Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=11.529514ms
grafana | logger=migrator t=2025-06-19T23:12:52.061083698Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule"
grafana | logger=migrator t=2025-06-19T23:12:52.074491644Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=13.405836ms
grafana | logger=migrator t=2025-06-19T23:12:52.077196156Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version"
grafana | logger=migrator t=2025-06-19T23:12:52.086646516Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.44949ms
grafana | logger=migrator t=2025-06-19T23:12:52.089612691Z level=info msg="Executing migration" id="remove the datasources:drilldown action"
grafana | logger=migrator t=2025-06-19T23:12:52.089867004Z level=info msg="Removed 0 datasources:drilldown permissions"
grafana | logger=migrator t=2025-06-19T23:12:52.089877144Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=264.833µs
grafana | logger=migrator t=2025-06-19T23:12:52.09644187Z level=info msg="Executing migration" id="remove title in folder unique index"
grafana | logger=migrator t=2025-06-19T23:12:52.098052939Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.615509ms
grafana | logger=migrator t=2025-06-19T23:12:52.100876612Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.686716894s
grafana | logger=migrator t=2025-06-19T23:12:52.10160538Z level=info msg="Unlocking database"
grafana | logger=sqlstore t=2025-06-19T23:12:52.11707045Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2025-06-19T23:12:52.117429485Z level=info msg="Created default organization"
grafana | logger=secrets t=2025-06-19T23:12:52.12217442Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T23:12:52.229929325Z level=info msg="Restored cache from database" duration=609.097µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.240284235Z level=info msg="Locking database"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.240299815Z level=info msg="Starting DB migrations"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.247681261Z level=info msg="Executing migration" id="create resource_migration_log table"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.248477491Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=793.519µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.253811613Z level=info msg="Executing migration" id="Initialize resource tables"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.253829123Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=18.23µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.257126431Z level=info msg="Executing migration" id="drop table resource"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.257208842Z level=info msg="Migration successfully executed" id="drop table resource" duration=82.751µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.261000277Z level=info msg="Executing migration" id="create table resource"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.262212311Z level=info msg="Migration successfully executed" id="create table resource" duration=1.211824ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.267756985Z level=info msg="Executing migration" id="create table resource, index: 0"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.269220952Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.463807ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.272255938Z level=info msg="Executing migration" id="drop table resource_history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.272342239Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=87.041µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.275471015Z level=info msg="Executing migration" id="create table resource_history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.276594518Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.123493ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.281374894Z level=info msg="Executing migration" id="create table resource_history, index: 0"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.28275680Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.381966ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.285843986Z level=info msg="Executing migration" id="create table resource_history, index: 1"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.28707930Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.235544ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.290108455Z level=info msg="Executing migration" id="drop table resource_version"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.290226997Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=119.092µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.294960862Z level=info msg="Executing migration" id="create table resource_version"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.295886173Z level=info msg="Migration successfully executed" id="create table resource_version" duration=925.111µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.298849437Z level=info msg="Executing migration" id="create table resource_version, index: 0"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.300095532Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.245195ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.303011356Z level=info msg="Executing migration" id="drop table resource_blob"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.303100837Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=90.141µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.30599790Z level=info msg="Executing migration" id="create table resource_blob"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.307211515Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.210435ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.311656647Z level=info msg="Executing migration" id="create table resource_blob, index: 0"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.312907461Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.251154ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.31714060Z level=info msg="Executing migration" id="create table resource_blob, index: 1"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.318385765Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.242735ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.322806966Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.333813644Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=11.002628ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.336904341Z level=info msg="Executing migration" id="Add column previous_resource_version in resource"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.347113189Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=10.208898ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.350150655Z level=info msg="Executing migration" id="Add index to resource_history for polling"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.351026855Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=875.68µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.355695429Z level=info msg="Executing migration" id="Add index to resource for loading"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.356553269Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=857.6µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.359561344Z level=info msg="Executing migration" id="Add column folder in resource_history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.370058076Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.496332ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.373250394Z level=info msg="Executing migration" id="Add column folder in resource"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.384186961Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.922567ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.387458429Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects"
grafana | logger=deletion-marker-migrator t=2025-06-19T23:12:52.38748110Z level=info msg="finding any deletion markers"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.387833194Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=374.085µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.392260605Z level=info msg="Executing migration" id="Add index to resource_history for get trash"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.393252617Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=991.622µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.410185464Z level=info msg="Executing migration" id="Add generation to resource history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.423948744Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=13.76388ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.426558655Z level=info msg="Executing migration" id="Add generation index to resource history"
grafana | logger=resource-migrator t=2025-06-19T23:12:52.427488425Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=927.10µs
grafana | logger=resource-migrator t=2025-06-19T23:12:52.431785775Z level=info msg="migrations completed" performed=26 skipped=0 duration=184.170304ms
grafana | logger=resource-migrator t=2025-06-19T23:12:52.432393352Z level=info msg="Unlocking database"
grafana | t=2025-06-19T23:12:52.432623225Z level=info caller=logger.go:214 time=2025-06-19T23:12:52.432600835Z msg="Using channel notifier" logger=sql-resource-server
grafana | logger=plugin.store t=2025-06-19T23:12:52.443767625Z level=info msg="Loading plugins..."
grafana | logger=plugins.registration t=2025-06-19T23:12:52.47683080Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered"
grafana | logger=plugins.initialization t=2025-06-19T23:12:52.476893121Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered"
grafana | logger=plugin.store t=2025-06-19T23:12:52.476948651Z level=info msg="Plugins loaded" count=53 duration=33.182136ms
grafana | logger=query_data t=2025-06-19T23:12:52.482014861Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2025-06-19T23:12:52.490283867Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-19T23:12:52.510240769Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
grafana | logger=ngalert t=2025-06-19T23:12:52.518556146Z level=info msg="Using simple database alert instance store"
grafana | logger=ngalert.state.manager.persist t=2025-06-19T23:12:52.518579906Z level=info msg="Using sync state persister"
grafana | logger=infra.usagestats.collector t=2025-06-19T23:12:52.52150205Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=grafanaStorageLogger t=2025-06-19T23:12:52.521847704Z level=info msg="Storage starting"
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:52.524238452Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
grafana | logger=ngalert.state.manager t=2025-06-19T23:12:52.524283033Z level=info msg="Warming state cache for startup"
grafana | logger=ngalert.multiorg.alertmanager t=2025-06-19T23:12:52.524758098Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=http.server t=2025-06-19T23:12:52.527317648Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
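With both migration passes finished, Grafana's HTTP server is now listening on port 3000. As a rough illustration of how a test harness could gate on that readiness (the helper below is hypothetical and not part of this job's scripts; it assumes the container port is published as localhost:3000 and that the requests package from the job's venv is importable):

    import time
    import requests  # assumption: available, e.g. requests==2.32.4 from the job venv

    def wait_for_grafana(base_url="http://localhost:3000", timeout_s=120):
        """Poll Grafana's /api/health endpoint until the database reports 'ok'."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                resp = requests.get(f"{base_url}/api/health", timeout=5)
                if resp.ok and resp.json().get("database") == "ok":
                    return True
            except requests.RequestException:
                pass  # server may still be coming up; retry after a short sleep
            time.sleep(2)
        return False

Note that a healthy /api/health only proves the core server and database are up; the provisioning and plugin installs logged below can still be in flight.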
grafana | logger=plugins.update.checker t=2025-06-19T23:12:52.629264506Z level=info msg="Update check succeeded" duration=106.675163ms
grafana | logger=grafana.update.checker t=2025-06-19T23:12:52.629550229Z level=info msg="Update check succeeded" duration=106.179957ms
grafana | logger=sqlstore.transactions t=2025-06-19T23:12:52.635256005Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T23:12:52.644126708Z level=info msg="Patterns update finished" duration=121.641026ms
grafana | logger=ngalert.state.manager t=2025-06-19T23:12:52.661733203Z level=info msg="State cache has been initialized" states=0 duration=137.44972ms
grafana | logger=ngalert.scheduler t=2025-06-19T23:12:52.661768864Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3
grafana | logger=ticker t=2025-06-19T23:12:52.662155268Z level=info msg=starting first_tick=2025-06-19T23:13:00Z
grafana | logger=provisioning.datasources t=2025-06-19T23:12:52.681353752Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2025-06-19T23:12:52.698831625Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2025-06-19T23:12:52.698849546Z level=info msg="finished to provision alerting"
grafana | logger=provisioning.dashboard t=2025-06-19T23:12:52.699921508Z level=info msg="starting to provision dashboards"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.796550894Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.798458286Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.803335862Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.809218421Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.812483389Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.817455887Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.818173065Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.818743202Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2025-06-19T23:12:52.819290548Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
grafana | logger=app-registry t=2025-06-19T23:12:52.870531445Z level=info msg="app registry initialized"
grafana | logger=provisioning.dashboard t=2025-06-19T23:12:53.340651007Z level=info msg="finished to provision dashboards"
grafana | logger=plugin.installer t=2025-06-19T23:12:53.418809148Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
grafana | logger=installer.fs t=2025-06-19T23:12:53.476491008Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app"
grafana | logger=plugins.registration t=2025-06-19T23:12:53.502445364Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:53.502468304Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=978.180581ms
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:53.502494004Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
grafana | logger=plugin.installer t=2025-06-19T23:12:53.910964278Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
grafana | logger=installer.fs t=2025-06-19T23:12:54.036406075Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app"
grafana | logger=plugins.registration t=2025-06-19T23:12:54.060794692Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:54.060816252Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=558.316888ms
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:54.060846442Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
grafana | logger=plugin.installer t=2025-06-19T23:12:54.353766172Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
grafana | logger=installer.fs t=2025-06-19T23:12:54.410474828Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app"
grafana | logger=plugins.registration t=2025-06-19T23:12:54.42600956Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:54.42603005Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=365.173758ms
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:54.426057361Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
grafana | logger=plugin.installer t=2025-06-19T23:12:54.771975263Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
grafana | logger=installer.fs t=2025-06-19T23:12:54.828415286Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app"
grafana | logger=plugins.registration t=2025-06-19T23:12:54.844604676Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app
grafana | logger=plugin.backgroundinstaller t=2025-06-19T23:12:54.844629376Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=418.566475ms
grafana | logger=infra.usagestats t=2025-06-19T23:14:23.532944874Z level=info msg="Usage stats are ready to report"
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
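The "Check if Zookeeper is healthy" preflight above is performed by Confluent's admin utility (the io.confluent.admin.utils.ZookeeperConnectionWatcher visible in the next lines), which opens a real client session. A much cruder approximation, sketched here only for illustration, is ZooKeeper's 'ruok' four-letter command; on ZooKeeper 3.5+ it must be enabled via 4lw.commands.whitelist, so this assumes the server's config allows it and is not a drop-in replacement for the Confluent check:

    import socket

    def zk_is_healthy(host="localhost", port=2181, timeout_s=5.0):
        """Send ZooKeeper's 'ruok' four-letter word; a live server replies 'imok'."""
        try:
            with socket.create_connection((host, port), timeout=timeout_s) as sock:
                sock.sendall(b"ruok")
                return sock.recv(16) == b"imok"
        except OSError:
            return False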
kafka | [2025-06-19 23:12:51,433] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,433] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,433] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,433] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,433] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,433] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,434] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,437] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,440] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-19 23:12:51,444] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-19 23:12:51,451] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-19 23:12:51,472] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-19 23:12:51,473] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-19 23:12:51,480] INFO Socket connection established, initiating session, client: /172.17.0.7:52332, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-19 23:12:51,503] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000269ae0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-19 23:12:51,628] INFO Session: 0x100000269ae0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-19 23:12:51,628] INFO EventThread shut down for session: 0x100000269ae0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
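The broker now starts against the ZooKeeper ensemble it just validated. Before driving tests at it, a harness typically waits for the broker listener to accept TCP connections; per the KafkaConfig dump further down, this compose setup advertises PLAINTEXT://kafka:9092 inside the Docker network and PLAINTEXT_HOST://localhost:29092 to the host. A minimal sketch (hypothetical helper, not part of this job; a successful connect only proves the socket is open, not that the broker is fully ready):

    import socket
    import time

    def wait_for_port(host="localhost", port=29092, timeout_s=120):
        """Retry a plain TCP connect until the listener accepts connections."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=3):
                    return True
            except OSError:
                time.sleep(2)
        return False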
kafka | [2025-06-19 23:12:52,360] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-19 23:12:52,631] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-19 23:12:52,705] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-19 23:12:52,706] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-19 23:12:52,706] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-19 23:12:52,720] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-19 23:12:52,727] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,727] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,727] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,727] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,727] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,727] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,728] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,731] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 23:12:52,735] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-19 23:12:52,741] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 23:12:52,743] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-19 23:12:52,746] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 23:12:52,754] INFO Socket connection established, initiating session, client: /172.17.0.7:52334, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 23:12:52,763] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000269ae0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 23:12:52,778] INFO [ZooKeeperClient Kafka server] Connected. 
kafka | [2025-06-19 23:12:53,103] INFO Cluster ID = CerIRx5NRkKJ8UCUyyU6pA (kafka.server.KafkaServer)
kafka | [2025-06-19 23:12:53,106] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-19 23:12:53,148] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.4-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-19 23:12:53,186] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 23:12:53,189] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 23:12:53,186] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 23:12:53,190] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 23:12:53,222] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-19 23:12:53,224] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-19 23:12:53,236] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager)
kafka | [2025-06-19 23:12:53,237] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-19 23:12:53,239] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
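The config dump shows the broker's two-listener layout: PLAINTEXT://0.0.0.0:9092 is advertised as kafka:9092 inside the compose network, while PLAINTEXT_HOST://0.0.0.0:29092 is advertised as localhost:29092 for host-side clients, and auto.create.topics.enable = true means a first publish can create its topic implicitly. A minimal kafka-python sketch of a host-side probe against the PLAINTEXT_HOST listener (illustrative only, not part of the CSIT job; it assumes kafka-python is installed and port 29092 is mapped to the host):

from kafka import KafkaProducer

# Bootstrap via the host-mapped listener advertised in the config above.
producer = KafkaProducer(bootstrap_servers="localhost:29092")
# With auto.create.topics.enable = true, this send may itself trigger
# creation of the topic (the log below shows policy-pdp-pap being created).
producer.send("policy-pdp-pap", b"connectivity-probe")
producer.flush(timeout=10)
producer.close()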
kafka | [2025-06-19 23:12:53,249] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-19 23:12:53,301] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-19 23:12:53,315] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-19 23:12:53,326] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-19 23:12:53,366] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 23:12:53,682] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-19 23:12:53,685] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-19 23:12:53,706] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-19 23:12:53,706] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-19 23:12:53,706] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-19 23:12:53,710] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-19 23:12:53,714] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 23:12:53,729] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,731] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,737] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,737] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,749] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-19 23:12:53,773] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 23:12:53,796] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750374773787,1750374773787,1,0,0,72057604400873473,258,0,27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 23:12:53,796] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 23:12:53,862] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-19 23:12:53,875] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,883] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,885] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:53,896] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 23:12:53,910] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:53,914] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:53,915] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:12:53,919] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-19 23:12:53,937] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:12:53,958] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-19 23:12:53,963] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-19 23:12:53,963] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-19 23:12:53,965] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:53,966] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
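Broker 1 is now registered under /brokers/ids/1 with its advertised endpoints, and the znode's czxid (27) becomes the broker epoch. A hedged kazoo sketch that reads the registration znode back (illustrative only; same reachability and library assumptions as the earlier sketch):

import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")
zk.start()
# The broker registration znode holds a small JSON document.
data, stat = zk.get("/brokers/ids/1")
info = json.loads(data)
print(info.get("endpoints"))  # expect the PLAINTEXT / PLAINTEXT_HOST addresses
print(stat.czxid)             # expect 27, the broker epoch logged above
zk.stop()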
kafka | [2025-06-19 23:12:53,969] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:53,971] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:53,973] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,000] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 23:12:54,002] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,010] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,020] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-19 23:12:54,023] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-19 23:12:54,030] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-19 23:12:54,030] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,031] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,031] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,031] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,034] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,034] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,034] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,035] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-19 23:12:54,035] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-19 23:12:54,037] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing.
(kafka.network.SocketServer) kafka | [2025-06-19 23:12:54,038] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-19 23:12:54,045] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-19 23:12:54,045] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-19 23:12:54,052] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-19 23:12:54,055] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-19 23:12:54,058] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-19 23:12:54,059] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-19 23:12:54,060] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-19 23:12:54,060] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-19 23:12:54,060] INFO Kafka startTimeMs: 1750374774051 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-19 23:12:54,062] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-19 23:12:54,063] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-19 23:12:54,063] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,072] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,073] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,081] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,084] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-19 23:12:54,085] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,086] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,098] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:54,134] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-19 23:12:54,183] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-19 23:12:54,218] INFO 
[BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-19 23:12:59,099] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-19 23:12:59,100] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-19 23:13:26,459] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-19 23:13:26,466] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-19 23:13:26,467] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-19 23:13:26,468] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-19 23:13:26,508] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(WYsuPorlTP-r6Vrybeg02w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(ud4H8XnXSw6Ff7RxtxwxHw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-19 23:13:26,510] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,513] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,514] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,515] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 23:13:26,515] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | 
[2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,520] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to 
NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 23:13:26,521] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-19 23:13:26,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,660] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,661] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,662] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,663] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-19 23:13:26,666] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-19 23:13:26,667] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-19 23:13:26,668] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-19 23:13:26,668] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-19 23:13:26,668] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-19 23:13:26,668] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-19 23:13:26,668] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-19 23:13:26,671] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
kafka | [2025-06-19 23:13:26,672] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,672] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,672] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,673] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 23:13:26,674] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-19 23:13:26,679] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,680] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,681] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,682] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-19 23:13:26,720] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32
(state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 
(state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-19 23:13:26,721] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-19 23:13:26,722] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 
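The entries above show broker 1 accepting the controller's LeaderAndIsr request and starting the become-leader transition for all 50 __consumer_offsets partitions plus policy-pdp-pap-0. When debugging a CSIT run like this, the resulting leader/ISR assignments can be checked from a client; the following is a minimal sketch (not part of this job) using Kafka's AdminClient, assuming kafka-clients 3.x on the classpath and a broker reachable at localhost:9092 (the compose setup may expose a different address).

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust to match the environment under test.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Describe the topics seen in the log above.
            Map<String, TopicDescription> topics = admin
                    .describeTopics(List.of("policy-pdp-pap", "__consumer_offsets"))
                    .allTopicNames().get();
            // Print leader and ISR per partition; these should mirror the
            // values reported by state.change.logger in the broker log.
            topics.forEach((name, desc) -> desc.partitions().forEach(p ->
                    System.out.printf("%s-%d leader=%s isr=%s%n",
                            name, p.partition(), p.leader(), p.isr())));
        }
    }
}
```

In this single-broker setup the expected output is leader=1 and isr=[1] for every partition, matching the transitions logged below.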
kafka | [2025-06-19 23:13:26,723] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
kafka | [2025-06-19 23:13:26,781] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,794] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,797] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,798] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,799] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,817] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,819] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,819] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,819] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,819] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,829] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,830] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,830] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,830] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,830] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,857] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,859] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,859] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,859] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,859] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,868] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,869] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,869] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,869] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,869] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,876] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,877] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,877] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,877] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,877] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,883] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,884] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,884] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,884] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,884] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,893] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,893] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,893] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,894] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,894] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,903] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,903] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,903] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,903] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,904] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,912] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,913] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,913] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,913] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,913] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,924] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,924] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,925] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,925] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,925] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,939] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,943] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,943] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,943] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,944] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,955] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,955] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,956] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,956] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,956] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,965] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,969] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,969] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,969] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,970] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,980] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,981] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,982] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,982] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,982] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:26,990] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:26,991] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:26,991] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,991] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:26,991] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,001] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,002] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,002] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,002] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,002] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,010] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,011] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,011] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,011] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,011] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,025] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,026] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,026] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,026] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,026] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,035] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,035] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,035] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,035] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,035] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,042] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,042] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,042] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,042] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,042] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,054] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,055] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,056] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,056] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,056] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,074] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,075] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,075] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,075] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,075] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,084] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,086] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,086] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,086] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,086] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,094] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,094] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,095] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,095] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,095] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,103] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,105] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,105] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,105] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,105] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,112] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,113] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,113] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,113] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,113] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,120] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,121] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,121] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,121] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,121] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,129] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,130] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,130] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,130] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,130] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,137] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,139] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,139] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,139] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,139] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,151] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,153] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,154] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,154] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,154] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,170] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,171] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,171] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,171] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,172] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,183] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,184] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,184] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,184] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,184] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,192] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,193] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,193] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,193] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,193] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,207] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,209] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,209] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,209] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,209] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(WYsuPorlTP-r6Vrybeg02w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,223] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,224] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,224] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,224] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,224] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,233] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,234] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,234] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,235] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,235] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,242] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,243] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,243] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,243] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,243] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,250] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,250] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,250] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,250] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,251] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,257] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,258] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,258] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,258] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,259] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,266] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,269] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,269] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,269] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,270] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,279] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,280] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,280] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,280] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,280] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,295] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,296] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,296] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,296] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,296] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,302] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,302] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,302] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,302] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,302] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,307] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,308] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,308] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,308] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,309] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-19 23:13:27,315] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 23:13:27,316] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 23:13:27,316] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,316] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,316] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 23:13:27,330] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 23:13:27,330] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 23:13:27,331] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,331] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,331] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 23:13:27,344] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 23:13:27,344] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 23:13:27,344] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,345] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 23:13:27,345] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-19 23:13:27,356] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,357] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,357] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,358] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,358] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,368] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,370] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,370] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,370] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,371] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 23:13:27,378] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 23:13:27,378] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 23:13:27,378] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,378] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 23:13:27,378] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(ud4H8XnXSw6Ff7RxtxwxHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
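
The fifty __consumer_offsets partitions and the single policy-pdp-pap partition have now each been created on disk (the offsets topic compacted, with compression.type=producer and 104857600-byte segments), and broker 1 has taken leadership of every one with ISR [1]. A minimal check of the result from the host, assuming the broker container is named kafka (as the log prefix and the kafka:9092 bootstrap address suggest) and ships the standard CLI (kafka-topics in Confluent images, kafka-topics.sh in Apache distributions):

$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets

Each listed partition should report Leader: 1 and Isr: 1, matching the state-change records above.
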
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-19 23:13:27,386] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-19 23:13:27,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger)
kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-19 23:13:27,387] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
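
With that, the broker has acknowledged the become-leader transition for all 51 partitions carried by LeaderAndIsr request correlationId 1 (the fifty __consumer_offsets partitions plus policy-pdp-pap-0); the GroupCoordinator lines that follow show it electing itself coordinator for each offsets partition. On this single-broker setup a quick health probe, under the same container-name assumption as above, is to ask for under-replicated partitions and expect empty output:

$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --under-replicated-partitions
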
kafka | [2025-06-19 23:13:27,393] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,395] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,396] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 23:13:27,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
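
The election and scheduling pairs above run over all fifty __consumer_offsets partitions because Kafka places the coordinator for a consumer group on partition abs(hash(group.id)) mod offsets.topic.num.partitions (50 by default), so every partition must have a live coordinator with its offsets loaded before any group can commit. Once the PAP and PDP consumers register later in the test, their coordinator assignments could be inspected with commands like the following (the group id is a placeholder, not taken from this log; same container-name assumption as above):

$ docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
$ docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group <group.id>
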
kafka | [2025-06-19 23:13:27,400] INFO [Broker id=1] Finished LeaderAndIsr request in 723ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-19 23:13:27,403] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,406] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,406] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,406] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,406] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,407] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ud4H8XnXSw6Ff7RxtxwxHw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=WYsuPorlTP-r6Vrybeg02w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
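
The controller's TRACE above closes the loop: the LeaderAndIsrResponseData carries errorCode=0 for every one of the 51 partitions across both topic ids, so controller and broker agree on the new leadership. Had any partition reported a non-zero code, a reasonable first triage step (same kafka container assumption as above) would be to scan the broker output for errors:

$ docker logs kafka 2>&1 | grep -iE 'error|exception' | head
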
kafka | [2025-06-19 23:13:27,407] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,408] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,415] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,415] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,415] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,415] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 23:13:27,420] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 
23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 
2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,422] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,423] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 23:13:27,424] TRACE [Controller id=1 epoch=1] 
Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-19 23:13:28,074] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 02e21be1-e894-44c3-898e-596b23538497 in Empty state. Created a new member id consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:28,084] INFO [GroupCoordinator 1]: Preparing to rebalance group 02e21be1-e894-44c3-898e-596b23538497 in state PreparingRebalance with old generation 0 (__consumer_offsets-1) (reason: Adding new member consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5 with group instance id None; client reason: need to re-join with the given member-id: consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:28,174] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:28,176] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:28,641] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa in Empty state. Created a new member id consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:28,644] INFO [GroupCoordinator 1]: Preparing to rebalance group 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337 with group instance id None; client reason: need to re-join with the given member-id: consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,096] INFO [GroupCoordinator 1]: Stabilized group 02e21be1-e894-44c3-898e-596b23538497 generation 1 (__consumer_offsets-1) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,116] INFO [GroupCoordinator 1]: Assignment received from leader consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5 for group 02e21be1-e894-44c3-898e-596b23538497 for generation 1. The group has 1 members, 0 of which are static. 
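
[editor's note] The GroupCoordinator entries above trace the classic-protocol join dance for three groups (02e21be1…, policy-pap, and the 183384fe… group that policy-apex-pdp logs below): a dynamic member with no member id is handed one and asked to rejoin, the group moves to PreparingRebalance, stabilizes at generation 1, and the leader's assignment is distributed. On the client side all of this is driven by nothing more than subscribe() plus the first poll(). A minimal sketch, assuming a reachable broker at kafka:9092 and the policy-pdp-pap topic as in this CSIT environment:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class JoinGroupDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // The first poll() drives FindCoordinator -> JoinGroup (rejected once so the
                // coordinator can assign a member id, hence "request the member to rejoin
                // with this id") -> JoinGroup again -> SyncGroup, i.e. exactly the
                // coordinator log lines above.
                consumer.poll(Duration.ofSeconds(5));
                System.out.println("assigned: " + consumer.assignment());
            }
        }
    }
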
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,180] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,190] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,646] INFO [GroupCoordinator 1]: Stabilized group 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 23:13:31,662] INFO [GroupCoordinator 1]: Assignment received from leader consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337 for group 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.7:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2025-06-19T23:13:27.751+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2025-06-19T23:13:27.911+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported 
= false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | 
ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-19T23:13:27.953+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-19T23:13:28.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-19T23:13:28.122+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-19T23:13:28.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374808120 policy-apex-pdp | [2025-06-19T23:13:28.125+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-1, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-19T23:13:28.145+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-19T23:13:28.145+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2025-06-19T23:13:28.147+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2025-06-19T23:13:28.169+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 183384fe-6f07-4ef7-aa4e-ce74cc6f79fa policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null 
policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-19T23:13:28.169+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-19T23:13:28.184+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-19T23:13:28.184+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-19T23:13:28.184+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374808184 policy-apex-pdp | [2025-06-19T23:13:28.184+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-19T23:13:28.185+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00863659-06e7-4966-8dff-f11c2b67ff37, alive=false, publisher=null]]: starting policy-apex-pdp | [2025-06-19T23:13:28.196+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.gzip.level = -1 policy-apex-pdp | compression.lz4.level = 9 policy-apex-pdp | compression.type = none policy-apex-pdp | compression.zstd.level = 3 policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name 
= null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2025-06-19T23:13:28.197+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-19T23:13:28.209+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
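
[editor's note] The ProducerConfig dump just printed shows the idempotent-producer defaults: enable.idempotence = true, acks = -1, retries = 2147483647, max.in.flight.requests.per.connection = 5. That is why the broker hands the client a producer id shortly afterwards ("ProducerId set to 2 with epoch 0" below): retried batches are deduplicated using the (producerId, epoch, sequence) triple. A minimal sketch mirroring a few of the logged values; the topic name and payload are taken from this log, everything else is stock Kafka client API:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class IdempotentProducerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // These mirror the values printed in the dump above.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }
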
policy-apex-pdp | [2025-06-19T23:13:28.228+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-19T23:13:28.228+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-19T23:13:28.228+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374808228 policy-apex-pdp | [2025-06-19T23:13:28.229+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00863659-06e7-4966-8dff-f11c2b67ff37, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2025-06-19T23:13:28.229+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2025-06-19T23:13:28.229+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2025-06-19T23:13:28.230+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2025-06-19T23:13:28.230+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2025-06-19T23:13:28.232+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2025-06-19T23:13:28.232+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2025-06-19T23:13:28.232+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2025-06-19T23:13:28.232+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660 policy-apex-pdp | [2025-06-19T23:13:28.232+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2025-06-19T23:13:28.233+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2025-06-19T23:13:28.250+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2025-06-19T23:13:28.252+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"f3e48156-a6bf-4c88-b1e4-b2bdfbc1d416","timestampMs":1750374808233,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-19T23:13:28.512+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2025-06-19T23:13:28.512+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-19T23:13:28.512+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2025-06-19T23:13:28.512+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-apex-pdp | [2025-06-19T23:13:28.525+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-19T23:13:28.525+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-19T23:13:28.525+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
policy-apex-pdp | [2025-06-19T23:13:28.525+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN
policy-apex-pdp | [2025-06-19T23:13:28.614+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-apex-pdp | [2025-06-19T23:13:28.614+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-apex-pdp | [2025-06-19T23:13:28.616+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-apex-pdp | [2025-06-19T23:13:28.623+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] (Re-)joining group
policy-apex-pdp | [2025-06-19T23:13:28.623+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-apex-pdp | [2025-06-19T23:13:28.642+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Request joining group due to: need to re-join with the given member-id: consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337
policy-apex-pdp | [2025-06-19T23:13:28.643+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] (Re-)joining group
policy-apex-pdp | [2025-06-19T23:13:29.110+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-apex-pdp | [2025-06-19T23:13:29.112+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-apex-pdp | [2025-06-19T23:13:31.648+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Successfully joined group with generation Generation{generationId=1, memberId='consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337', protocol='range'}
policy-apex-pdp | [2025-06-19T23:13:31.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Finished assignment for group at generation 1: {consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337=Assignment(partitions=[policy-pdp-pap-0])}
policy-apex-pdp | [2025-06-19T23:13:31.665+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Successfully synced group in generation Generation{generationId=1, memberId='consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2-48c54a29-7e8f-41c3-b98d-f62cd6aca337', protocol='range'}
policy-apex-pdp | [2025-06-19T23:13:31.666+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-apex-pdp | [2025-06-19T23:13:31.668+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Adding newly assigned partitions: policy-pdp-pap-0
policy-apex-pdp | [2025-06-19T23:13:31.674+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Found no committed offset for partition policy-pdp-pap-0
policy-apex-pdp | [2025-06-19T23:13:31.685+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183384fe-6f07-4ef7-aa4e-ce74cc6f79fa-2, groupId=183384fe-6f07-4ef7-aa4e-ce74cc6f79fa] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-apex-pdp | [2025-06-19T23:13:48.233+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9c998ef2-f976-4d52-b973-6ae1119b8574","timestampMs":1750374828233,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-19T23:13:48.256+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9c998ef2-f976-4d52-b973-6ae1119b8574","timestampMs":1750374828233,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-19T23:13:48.259+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:13:48.397+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","timestampMs":1750374828338,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.421+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8b6629c2-75d5-4ab2-a976-26e06c6c7269","timestampMs":1750374828421,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-19T23:13:48.422+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-apex-pdp | [2025-06-19T23:13:48.423+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"67aff375-5ca8-4990-80d5-7c5519a1e210","timestampMs":1750374828423,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.442+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8b6629c2-75d5-4ab2-a976-26e06c6c7269","timestampMs":1750374828421,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-19T23:13:48.442+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:13:48.449+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"67aff375-5ca8-4990-80d5-7c5519a1e210","timestampMs":1750374828423,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.449+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:13:48.508+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a7f59166-1371-41e3-986a-b174e5779032","timestampMs":1750374828339,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.511+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a7f59166-1371-41e3-986a-b174e5779032","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6c7610ed-8cd8-438b-bda8-e616f0557a79","timestampMs":1750374828511,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.521+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a7f59166-1371-41e3-986a-b174e5779032","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6c7610ed-8cd8-438b-bda8-e616f0557a79","timestampMs":1750374828511,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.521+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:13:48.539+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"82905cc9-0ce0-4032-b02a-5c15186e4845","timestampMs":1750374828518,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.541+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82905cc9-0ce0-4032-b02a-5c15186e4845","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"8a596af8-95bb-4ffe-892a-245276960def","timestampMs":1750374828540,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.549+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82905cc9-0ce0-4032-b02a-5c15186e4845","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"8a596af8-95bb-4ffe-892a-245276960def","timestampMs":1750374828540,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:13:48.549+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:13:53.033+00:00|INFO|RequestLog|qtp1089680530-32] 172.17.0.1 - - [19/Jun/2025:23:13:52 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-19T23:13:56.119+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.4 - policyadmin [19/Jun/2025:23:13:56 +0000] "GET /metrics HTTP/1.1" 200 2046 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-19T23:14:13.077+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [19/Jun/2025:23:14:13 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-19T23:14:56.077+00:00|INFO|RequestLog|qtp1089680530-28] 172.17.0.4 - policyadmin [19/Jun/2025:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 2052 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-19T23:15:48.408+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"6b4f1297-9901-4ef3-a620-605458caba61","timestampMs":1750374948408,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:15:48.420+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"6b4f1297-9901-4ef3-a620-605458caba61","timestampMs":1750374948408,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-19T23:15:48.420+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-19T23:15:56.081+00:00|INFO|RequestLog|qtp1089680530-26] 172.17.0.4 - policyadmin [19/Jun/2025:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 2063 "" "Prometheus/3.4.1"
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.8:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api | . ____ _ __ _ _
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-api | =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot :: (v3.4.6)
policy-api |
policy-api | [2025-06-19T23:13:06.155+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-19T23:13:06.216+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 36 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-19T23:13:06.217+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-19T23:13:07.589+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-19T23:13:07.762+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 163 ms. Found 6 JPA repository interfaces.
policy-api | [2025-06-19T23:13:08.397+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-19T23:13:08.416+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-19T23:13:08.418+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-19T23:13:08.418+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-19T23:13:08.454+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-19T23:13:08.455+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2179 ms
policy-api | [2025-06-19T23:13:08.777+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-19T23:13:08.862+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-19T23:13:08.910+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-19T23:13:09.290+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-19T23:13:09.329+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-19T23:13:09.519+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c
policy-api | [2025-06-19T23:13:09.522+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-19T23:13:09.605+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | Database driver: undefined/unknown
policy-api | Database version: 16.4
policy-api | Autocommit mode: undefined/unknown
policy-api | Isolation level: undefined/unknown
policy-api | Minimum pool size: undefined/unknown
policy-api | Maximum pool size: undefined/unknown
policy-api | [2025-06-19T23:13:11.500+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-19T23:13:11.506+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-19T23:13:12.128+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-19T23:13:12.965+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-19T23:13:14.032+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-19T23:13:14.074+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-19T23:13:14.675+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-19T23:13:14.806+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-19T23:13:14.833+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-19T23:13:14.859+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.346 seconds (process running for 9.951)
policy-api | [2025-06-19T23:13:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-19T23:13:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-19T23:13:39.924+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-19T23:14:58.639+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.5) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.5) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.5) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.5) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.447629
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.498039
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.549829
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.627735
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.668247
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.714204
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.760261
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.811541
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.860665
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.906881
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:53.956169
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.002078
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.074073
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.120606
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.16903
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.213952
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.256022
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.312781
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.365226
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.417015
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.458516
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.510626
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.573666
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.612634
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.652079
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.702177
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.745231
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.811021
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.8542
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.898093
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.94241
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:54.989968
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.037831
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.092978
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.14649
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.193426
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.263204
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.31024
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.360695
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.413342
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.462464
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.515794
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.565231
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.615333
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.66808
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.719559
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.769363
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.821575
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.874401
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.930573
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:55.974901
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.01902
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.065972
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.116486
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.172305
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.222101
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.275313
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.327087
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.379993
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.435036
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.489387
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.540122
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.58878
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.642445
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.695864
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.74476
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.79735
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.851607
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.900358
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:56.951951
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.000913
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.054948
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.107509
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.155349
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.203567
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.253242
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.305456
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.355233
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.402556
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.449477
policy-db-migrator | 81 |
0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.495118 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.549338 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.598569 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.65136 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.696144 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.745738 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.812375 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.859769 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.905281 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:57.957275 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.030238 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.073775 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.122739 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.172376 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.223601 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906252312530800u | 1 | 2025-06-19 23:12:58.277814 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.324398 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.376154 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.427727 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.47599 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.542946 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.59203 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.642603 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.689096 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.739836 
policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.791935 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.841938 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.896268 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1906252312530900u | 1 | 2025-06-19 23:12:58.941387 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.01247 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.059 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.10617 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.160213 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.254891 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.301218 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.355108 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.401671 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1906252312531000u | 1 | 2025-06-19 23:12:59.446465 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1906252312531100u | 1 | 2025-06-19 23:12:59.533675 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1906252312531200u | 1 | 2025-06-19 23:12:59.583368 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1906252312531200u | 1 | 2025-06-19 23:12:59.634661 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1906252312531200u | 1 | 2025-06-19 23:12:59.684181 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1906252312531200u | 1 | 2025-06-19 23:12:59.752863 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1906252312531300u | 1 | 2025-06-19 23:12:59.800532 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1906252312531300u | 1 | 2025-06-19 23:12:59.847905 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1906252312531300u | 1 | 2025-06-19 23:12:59.888453 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... 
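The 126-row changelog above is the migrator's bookkeeping: every executed script becomes one row in a per-database <name>_schema_changelog relation (the names appear in the NOTICE lines), keyed by id and carrying the script, from/to versions, tag, success flag, and timestamp. A minimal sketch of reading that history back over JDBC; the connection URL, target database, and credentials below are illustrative assumptions, not values taken from this job:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ChangelogCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; host, port and password are not in this log.
            String url = "jdbc:postgresql://postgres:5432/migration";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "<password>");
                 Statement st = conn.createStatement();
                 // Column names are taken from the changelog listing printed above.
                 ResultSet rs = st.executeQuery(
                         "SELECT id, script, from_version, to_version, success, attime"
                       + " FROM policyadmin_schema_changelog ORDER BY id")) {
                while (rs.next()) {
                    System.out.printf("%3d %-60s %4s -> %4s ok=%d%n",
                            rs.getInt("id"), rs.getString("script"),
                            rs.getString("from_version"), rs.getString("to_version"),
                            rs.getInt("success"));
                }
            }
        }
    }

A success value other than 1 in any row is the quickest way to spot which script broke a run like this one.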
policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | 
| | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator 
| ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | 
| | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.515202 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.572525 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.628426 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.691355 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.742908 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.798842 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.851346 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.9148 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:00.963427 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:01.014438 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:01.066665 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:01.113191 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1906252313001400u | 1 | 2025-06-19 23:13:01.164829 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.216326 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.26768 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.324236 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.376286 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.427359 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.475308 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1906252313001500u | 1 | 2025-06-19 23:13:01.513182 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1906252313001500u | 
1 | 2025-06-19 23:13:01.560379 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1906252313001600u | 1 | 2025-06-19 23:13:01.601367 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1906252313001600u | 1 | 2025-06-19 23:13:01.645566 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1906252313001601u | 1 | 2025-06-19 23:13:01.694579 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1906252313001601u | 1 | 2025-06-19 23:13:01.744164 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1906252313001700u | 1 | 2025-06-19 23:13:01.811359 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1906252313001700u | 1 | 2025-06-19 23:13:01.867552 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1906252313001700u | 1 | 2025-06-19 23:13:01.92392 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:01.980794 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.047248 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.096055 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.146532 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.194443 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.24952 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.316117 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.369385 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1906252313001701u | 1 | 2025-06-19 23:13:02.420496 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... 
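Every "> upgrade NNNN-name.sql" block above follows one contract: execute the numbered script in filename order, then record a changelog row tagged with the target version; rc=0 marks the step as successful. A sketch of that loop under stated assumptions (hypothetical script directory, illustrative credentials, changelog columns as printed in the log), meant to illustrate the bookkeeping pattern visible here rather than reproduce the migrator's actual code:

    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class UpgradeRunner {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and credentials; not taken from this job.
            String url = "jdbc:postgresql://postgres:5432/clampacm";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "<password>")) {
                List<Path> scripts = new ArrayList<>();
                // Hypothetical script location; the 0100-, 0200-, ... prefixes fix the order.
                try (DirectoryStream<Path> ds = Files.newDirectoryStream(Paths.get("sql/1701"), "*.sql")) {
                    ds.forEach(scripts::add);
                }
                scripts.sort(Comparator.comparing(p -> p.getFileName().toString()));
                for (Path script : scripts) {
                    System.out.println("> upgrade " + script.getFileName());
                    try (Statement st = conn.createStatement()) {
                        st.execute(Files.readString(script)); // the CREATE/ALTER/UPDATE lines above
                    }
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO clampacm_schema_changelog"
                          + " (script, operation, from_version, to_version, tag, success, attime)"
                          + " VALUES (?, 'upgrade', ?, ?, ?, 1, now())")) {
                        ps.setString(1, script.getFileName().toString());
                        ps.setString(2, "1601");
                        ps.setString(3, "1701");
                        ps.setString(4, "1906252313001701u"); // tag format as printed above
                        ps.executeUpdate(); // one of the "INSERT 0 1" lines per step
                    }
                    System.out.println("rc=0");
                }
            }
        }
    }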
policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 
| | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" 
already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1906252313031600u | 1 | 2025-06-19 23:13:03.067369 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1906252313031600u | 1 | 2025-06-19 23:13:03.675806 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1906252313031600u | 1 | 2025-06-19 23:13:03.735669 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.9:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.7:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |
policy-pap | :: Spring Boot ::                (v3.4.6)
policy-pap |
policy-pap | [2025-06-19T23:13:16.841+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 56 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-19T23:13:16.842+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-19T23:13:18.226+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-19T23:13:18.304+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 67 ms. Found 7 JPA repository interfaces.
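The "Waiting for api port 6969..." / "Waiting for kafka port 9092..." lines above show the container blocking until its dependencies accept TCP connections before the JVM starts. The image does this in its startup script; the following is only a rough Java equivalent of such a wait-for-port loop, with both timeout values being assumptions:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPort {
        // Polls host:port until a TCP connect succeeds, mirroring the
        // "Waiting for api port 6969... api (172.17.0.9:6969) open" lines above.
        static void waitFor(String host, int port) throws InterruptedException {
            System.out.printf("Waiting for %s port %d...%n", host, port);
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 2_000); // per-attempt timeout (assumed)
                    System.out.printf("%s (%s:%d) open%n",
                            host, s.getInetAddress().getHostAddress(), port);
                    return;
                } catch (IOException notYetUp) {
                    Thread.sleep(1_000); // retry interval (assumed)
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            waitFor("api", 6969);   // policy-api REST port
            waitFor("kafka", 9092); // Kafka broker
        }
    }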
policy-pap | [2025-06-19T23:13:19.255+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-19T23:13:19.268+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-19T23:13:19.270+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-19T23:13:19.270+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-19T23:13:19.323+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-19T23:13:19.323+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2424 ms
policy-pap | [2025-06-19T23:13:19.799+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-19T23:13:19.891+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-19T23:13:19.952+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-19T23:13:20.419+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-19T23:13:20.479+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-19T23:13:20.740+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@126d8659
policy-pap | [2025-06-19T23:13:20.743+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2025-06-19T23:13:20.848+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap | Database driver: undefined/unknown
policy-pap | Database version: 16.4
policy-pap | Autocommit mode: undefined/unknown
policy-pap | Isolation level: undefined/unknown
policy-pap | Minimum pool size: undefined/unknown
policy-pap | Maximum pool size: undefined/unknown
policy-pap | [2025-06-19T23:13:22.861+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-19T23:13:22.865+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-19T23:13:24.091+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-02e21be1-e894-44c3-898e-596b23538497-1
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = 02e21be1-e894-44c3-898e-596b23538497
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-19T23:13:24.145+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T23:13:24.298+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T23:13:24.298+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T23:13:24.298+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374804296 policy-pap | [2025-06-19T23:13:24.300+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-1, groupId=02e21be1-e894-44c3-898e-596b23538497] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-19T23:13:24.301+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-19T23:13:24.301+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-19T23:13:24.309+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-19T23:13:24.309+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-19T23:13:24.309+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374804309
policy-pap | [2025-06-19T23:13:24.310+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-19T23:13:24.635+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2025-06-19T23:13:24.752+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
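Editor's note: the WARN above is Spring Boot's standard open-in-view advisory, not an error. One hedged way to silence it, assuming the PolicyPapApplication main class named later in this log lives in the usual org.onap.policy.pap.main package (not shown here):

    import org.springframework.boot.SpringApplication;
    import java.util.Properties;

    public final class OpenInViewSketch {
        public static void main(String[] args) {
            // PolicyPapApplication is the main class named in this log; its package is assumed.
            SpringApplication app = new SpringApplication(PolicyPapApplication.class);
            Properties defaults = new Properties();
            defaults.setProperty("spring.jpa.open-in-view", "false"); // our choice; the CSIT image leaves the default
            app.setDefaultProperties(defaults);
            app.run(args);
        }
    }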
policy-pap | [2025-06-19T23:13:24.830+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-pap | [2025-06-19T23:13:25.063+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
policy-pap | [2025-06-19T23:13:25.808+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-pap | [2025-06-19T23:13:25.927+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-19T23:13:25.951+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2025-06-19T23:13:25.970+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2025-06-19T23:13:25.971+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2025-06-19T23:13:25.971+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2025-06-19T23:13:25.972+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2025-06-19T23:13:25.972+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-pap | [2025-06-19T23:13:25.972+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-pap | [2025-06-19T23:13:25.972+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-pap | [2025-06-19T23:13:25.974+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02e21be1-e894-44c3-898e-596b23538497, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3186b07d
policy-pap | [2025-06-19T23:13:25.984+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02e21be1-e894-44c3-898e-596b23538497, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
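Editor's note: each SingleThreadedKafkaTopicSource above wraps one consumer in a dedicated thread and fans records out to its registered listeners (here a MessageTypeDispatcher). A rough sketch of that pattern, using the 15000 ms fetchTimeout from the log; everything except the Kafka client API is invented for illustration:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    final class SingleThreadedSourceSketch implements Runnable {
        private final KafkaConsumer<String, String> consumer;
        private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();
        private volatile boolean alive = true; // mirrors the alive=false/true flag in the toString() dumps

        SingleThreadedSourceSketch(KafkaConsumer<String, String> consumer, String topic) {
            this.consumer = consumer;
            consumer.subscribe(List.of(topic));
        }

        void register(Consumer<String> listener) { listeners.add(listener); }

        @Override public void run() {
            while (alive) {
                // fetchTimeout=15000 in the log: one blocking poll per loop iteration
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(15000))) {
                    listeners.forEach(l -> l.accept(rec.value()));
                }
            }
        }
    }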
policy-pap | [2025-06-19T23:13:25.984+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-02e21be1-e894-44c3-898e-596b23538497-3
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = 02e21be1-e894-44c3-898e-596b23538497
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-19T23:13:25.985+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-19T23:13:25.991+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-19T23:13:25.991+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-19T23:13:25.991+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374805991
policy-pap | [2025-06-19T23:13:25.991+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-19T23:13:25.992+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-pap | [2025-06-19T23:13:25.992+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32e2619a-6c9a-46cc-9da4-ae57add6f626, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f45d7db
policy-pap | [2025-06-19T23:13:25.992+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32e2619a-6c9a-46cc-9da4-ae57add6f626, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-19T23:13:25.992+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-4
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = policy-pap
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-19T23:13:25.992+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-19T23:13:25.997+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374805997
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|ServiceManager|main] Policy PAP starting topics
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=32e2619a-6c9a-46cc-9da4-ae57add6f626, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=02e21be1-e894-44c3-898e-596b23538497, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-19T23:13:25.998+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=76371ba7-b20b-406a-9b08-b8b8316d5a3b, alive=false, publisher=null]]: starting
policy-pap | [2025-06-19T23:13:26.010+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-1
policy-pap | 	compression.gzip.level = -1
policy-pap | 	compression.lz4.level = 9
policy-pap | 	compression.type = none
policy-pap | 	compression.zstd.level = 3
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-19T23:13:26.010+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-19T23:13:26.023+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2025-06-19T23:13:26.038+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-19T23:13:26.038+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-19T23:13:26.038+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374806037
policy-pap | [2025-06-19T23:13:26.038+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=76371ba7-b20b-406a-9b08-b8b8316d5a3b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-19T23:13:26.038+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3da009f9-9b78-4c0e-9bdd-0d22a3790246, alive=false, publisher=null]]: starting
policy-pap | [2025-06-19T23:13:26.039+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-2
policy-pap | 	compression.gzip.level = -1
policy-pap | 	compression.lz4.level = 9
policy-pap | 	compression.type = none
policy-pap | 	compression.zstd.level = 3
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-19T23:13:26.039+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-19T23:13:26.039+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
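Editor's note: both producers are created idempotent, and the ProducerConfig dumps above show the matching knobs (enable.idempotence=true, acks=-1, retries=2147483647, max.in.flight.requests.per.connection=5). A minimal equivalent configuration; only the topic name and payload are taken from this log, the rest is a generic sketch:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public final class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Enabling idempotence implies acks=all and effectively unlimited retries,
            // which is exactly what the ProducerConfig dumps above report.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            }
        }
    }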
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750374806044
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3da009f9-9b78-4c0e-9bdd-0d22a3790246, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2025-06-19T23:13:26.044+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2025-06-19T23:13:26.046+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2025-06-19T23:13:26.047+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2025-06-19T23:13:26.051+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2025-06-19T23:13:26.051+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2025-06-19T23:13:26.051+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2025-06-19T23:13:26.051+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2025-06-19T23:13:26.052+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2025-06-19T23:13:26.053+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2025-06-19T23:13:26.054+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2025-06-19T23:13:26.054+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.968 seconds (process running for 10.516)
policy-pap | [2025-06-19T23:13:26.444+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-19T23:13:26.446+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-pap | [2025-06-19T23:13:26.447+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-pap | [2025-06-19T23:13:26.448+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-pap | [2025-06-19T23:13:26.478+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-19T23:13:26.479+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-19T23:13:26.504+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
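Editor's note: the UNKNOWN_TOPIC_OR_PARTITION / LEADER_NOT_AVAILABLE warnings here and below are the normal, recoverable race between the first metadata requests and topic auto-creation (allow.auto.create.topics=true in the consumer dumps); they stop once a leader is elected. Pre-creating the topic would avoid them entirely; a sketch with the standard AdminClient, where 1 partition and replication factor 1 are assumptions matching a one-broker test cluster:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.List;
    import java.util.Properties;

    public final class TopicBootstrapSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Create policy-pdp-pap up front so the first metadata fetch already finds a leader.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get();
            }
        }
    }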
policy-pap | [2025-06-19T23:13:26.504+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Cluster ID: CerIRx5NRkKJ8UCUyyU6pA
policy-pap | [2025-06-19T23:13:26.622+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-19T23:13:26.636+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T23:13:26.832+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T23:13:26.866+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T23:13:27.277+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T23:13:27.330+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T23:13:28.045+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-19T23:13:28.050+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] (Re-)joining group
policy-pap | [2025-06-19T23:13:28.078+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Request joining group due to: need to re-join with the given member-id: consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5
policy-pap | [2025-06-19T23:13:28.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] (Re-)joining group
policy-pap | [2025-06-19T23:13:28.162+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-19T23:13:28.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
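Editor's note: both consumers locate the group coordinator at kafka:9092 (coordinator id 2147483646 is Integer.MAX_VALUE minus the broker id 1) and then join their respective groups. A sketch of subscribing with a rebalance listener that surfaces the same assignment events logged below; only the Kafka client API is real here:

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.util.Collection;
    import java.util.List;

    final class RebalanceLoggingSketch {
        static void subscribe(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    // mirrors "Adding newly assigned partitions: policy-pdp-pap-0"
                    System.out.println("Adding newly assigned partitions: " + parts);
                }
                @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    System.out.println("Revoking partitions: " + parts);
                }
            });
        }
    }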
policy-pap | [2025-06-19T23:13:28.175+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2
policy-pap | [2025-06-19T23:13:28.175+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-19T23:13:31.099+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Successfully joined group with generation Generation{generationId=1, memberId='consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5', protocol='range'}
policy-pap | [2025-06-19T23:13:31.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Finished assignment for group at generation 1: {consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-19T23:13:31.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Successfully synced group in generation Generation{generationId=1, memberId='consumer-02e21be1-e894-44c3-898e-596b23538497-3-e7d5e8b8-9898-453c-a3e2-45023dbc8ad5', protocol='range'}
policy-pap | [2025-06-19T23:13:31.128+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-19T23:13:31.130+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-19T23:13:31.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-19T23:13:31.161+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-02e21be1-e894-44c3-898e-596b23538497-3, groupId=02e21be1-e894-44c3-898e-596b23538497] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
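Editor's note: with no committed offset for policy-pdp-pap-0 and auto.offset.reset=latest (the value in every ConsumerConfig dump above), the consumer is positioned at the current log end, offset 1, so only records produced from here on are delivered. The relevant property, for reference:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import java.util.Properties;

    final class OffsetResetSketch {
        static Properties baseProps() {
            Properties props = new Properties();
            // "latest" positions a group with no committed offset at the log end;
            // "earliest" would instead replay the topic from the beginning.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            return props;
        }
    }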
policy-pap | [2025-06-19T23:13:31.184+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2', protocol='range'}
policy-pap | [2025-06-19T23:13:31.185+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-19T23:13:31.194+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-f6dcbd9f-fe9b-4690-ba7e-683268a8f3a2', protocol='range'}
policy-pap | [2025-06-19T23:13:31.195+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-19T23:13:31.195+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-19T23:13:31.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-19T23:13:31.199+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
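Editor's note: both assignments cover the same single partition, policy-pdp-pap-0. Because the two consumers sit in different groups (the UUID group 02e21be1-… and policy-pap), each group independently receives every record on the topic; that is how PAP reads the same stream once as "policy-pdp-pap" and once as the mapped "policy-heartbeat" source. A condensed sketch of that fan-out:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.util.List;
    import java.util.Properties;

    final class FanOutSketch {
        static KafkaConsumer<String, String> consumerInGroup(String groupId) {
            Properties p = new Properties();
            p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ConsumerConfig.GROUP_ID_CONFIG, groupId); // a distinct group gets its own copy of the stream
            p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            KafkaConsumer<String, String> c = new KafkaConsumer<>(p);
            c.subscribe(List.of("policy-pdp-pap"));
            return c;
        }
    }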
policy-pap | [2025-06-19T23:13:41.625+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | [2025-06-19T23:13:41.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-pap | [2025-06-19T23:13:41.627+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 1 ms
policy-pap | [2025-06-19T23:13:48.270+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-19T23:13:48.270+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9c998ef2-f976-4d52-b973-6ae1119b8574","timestampMs":1750374828233,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-19T23:13:48.270+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9c998ef2-f976-4d52-b973-6ae1119b8574","timestampMs":1750374828233,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-19T23:13:48.277+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-19T23:13:48.354+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting
policy-pap | [2025-06-19T23:13:48.354+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting listener
policy-pap | [2025-06-19T23:13:48.354+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting timer
policy-pap | [2025-06-19T23:13:48.354+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=39ce5ee3-d10f-46cd-9314-200f4f87d9bd, expireMs=1750374858354]
policy-pap | [2025-06-19T23:13:48.356+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting enqueue
policy-pap | [2025-06-19T23:13:48.356+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate started
policy-pap | [2025-06-19T23:13:48.356+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=39ce5ee3-d10f-46cd-9314-200f4f87d9bd, expireMs=1750374858354]
policy-pap | [2025-06-19T23:13:48.359+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","timestampMs":1750374828338,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-19T23:13:48.389+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","timestampMs":1750374828338,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
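Editor's note: PAP publishes PDP_UPDATE on policy-pdp-pap and, because the policy-heartbeat source maps onto the same effective topic, immediately sees its own message echoed back on both sources; the dispatcher lines below discard those echoes by type. A sketch of routing such payloads by their messageName field, using Jackson; the field names come from the JSON above, everything else is illustrative:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    final class MessageRoutingSketch {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        static void onMessage(String json) throws Exception {
            JsonNode msg = MAPPER.readTree(json);
            switch (msg.path("messageName").asText()) {
                case "PDP_STATUS" -> System.out.println("status from " + msg.path("name").asText());
                case "PDP_UPDATE", "PDP_STATE_CHANGE" ->
                    // PAP's own outbound requests echoed back: same as "discarding event of type ..."
                    System.out.println("discarding event of type " + msg.path("messageName").asText());
                default -> System.out.println("unknown message");
            }
        }
    }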
policy-pap | [2025-06-19T23:13:48.389+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T23:13:48.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","timestampMs":1750374828338,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-19T23:13:48.391+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T23:13:48.433+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8b6629c2-75d5-4ab2-a976-26e06c6c7269","timestampMs":1750374828421,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-19T23:13:48.441+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8b6629c2-75d5-4ab2-a976-26e06c6c7269","timestampMs":1750374828421,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-19T23:13:48.442+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-19T23:13:48.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"67aff375-5ca8-4990-80d5-7c5519a1e210","timestampMs":1750374828423,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping enqueue
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping timer
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=39ce5ee3-d10f-46cd-9314-200f4f87d9bd, expireMs=1750374858354]
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping listener
policy-pap | [2025-06-19T23:13:48.490+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopped
policy-pap | [2025-06-19T23:13:48.495+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
PdpUpdate","policies":[],"response":{"responseTo":"39ce5ee3-d10f-46cd-9314-200f4f87d9bd","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"67aff375-5ca8-4990-80d5-7c5519a1e210","timestampMs":1750374828423,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.496+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 39ce5ee3-d10f-46cd-9314-200f4f87d9bd policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate successful policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 start publishing next request policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange starting policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange starting listener policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange starting timer policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a7f59166-1371-41e3-986a-b174e5779032, expireMs=1750374858497] policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange starting enqueue policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange started policy-pap | [2025-06-19T23:13:48.497+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=a7f59166-1371-41e3-986a-b174e5779032, expireMs=1750374858497] policy-pap | [2025-06-19T23:13:48.498+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a7f59166-1371-41e3-986a-b174e5779032","timestampMs":1750374828339,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.509+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a7f59166-1371-41e3-986a-b174e5779032","timestampMs":1750374828339,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.509+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-19T23:13:48.521+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a7f59166-1371-41e3-986a-b174e5779032","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"6c7610ed-8cd8-438b-bda8-e616f0557a79","timestampMs":1750374828511,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.521+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a7f59166-1371-41e3-986a-b174e5779032 policy-pap | [2025-06-19T23:13:48.526+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a7f59166-1371-41e3-986a-b174e5779032","timestampMs":1750374828339,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.527+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-19T23:13:48.529+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a7f59166-1371-41e3-986a-b174e5779032","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6c7610ed-8cd8-438b-bda8-e616f0557a79","timestampMs":1750374828511,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.529+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange stopping policy-pap | [2025-06-19T23:13:48.529+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange stopping enqueue policy-pap | [2025-06-19T23:13:48.529+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange stopping timer policy-pap | [2025-06-19T23:13:48.529+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a7f59166-1371-41e3-986a-b174e5779032, expireMs=1750374858497] policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange stopping listener policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange stopped policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpStateChange successful policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 start publishing next request policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting listener policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting timer policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=82905cc9-0ce0-4032-b02a-5c15186e4845, expireMs=1750374858530] 
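Before the next PdpUpdate is enqueued below, note the correlation pattern just shown: PAP publishes PDP_UPDATE and PDP_STATE_CHANGE requests on policy-pdp-pap, arms a 30000 ms timer per requestId, and matches each incoming PDP_STATUS to its request via the response.responseTo field, while discarding the copies of its own outbound messages that it also consumes. A minimal sketch of that correlation, assuming the kafka-python client and a local broker address, neither of which is taken from this job:

# Sketch only: match PDP_STATUS responses to pending PAP requests by responseTo.
# kafka-python and the localhost:9092 broker are assumptions for illustration.
import json
from kafka import KafkaConsumer

pending = {"a7f59166-1371-41e3-986a-b174e5779032"}  # requestIds awaiting a reply

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for record in consumer:
    msg = record.value
    if msg.get("messageName") != "PDP_STATUS":
        continue  # PAP also sees its own PDP_UPDATE/PDP_STATE_CHANGE here and discards them
    response = msg.get("response") or {}
    if response.get("responseTo") in pending:
        pending.discard(response["responseTo"])
        print(response["responseStatus"], response.get("responseMessage"))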
policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate starting enqueue policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate started policy-pap | [2025-06-19T23:13:48.530+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"82905cc9-0ce0-4032-b02a-5c15186e4845","timestampMs":1750374828518,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.538+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"82905cc9-0ce0-4032-b02a-5c15186e4845","timestampMs":1750374828518,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.538+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-962859e8-b648-488a-a376-d0f11e9d4b11","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"82905cc9-0ce0-4032-b02a-5c15186e4845","timestampMs":1750374828518,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.539+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T23:13:48.541+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T23:13:48.548+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82905cc9-0ce0-4032-b02a-5c15186e4845","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"8a596af8-95bb-4ffe-892a-245276960def","timestampMs":1750374828540,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.549+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82905cc9-0ce0-4032-b02a-5c15186e4845","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"8a596af8-95bb-4ffe-892a-245276960def","timestampMs":1750374828540,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 82905cc9-0ce0-4032-b02a-5c15186e4845 policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping policy-pap | 
[2025-06-19T23:13:48.550+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping enqueue policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping timer policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=82905cc9-0ce0-4032-b02a-5c15186e4845, expireMs=1750374858530] policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopping listener policy-pap | [2025-06-19T23:13:48.550+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate stopped policy-pap | [2025-06-19T23:13:48.554+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 PdpUpdate successful policy-pap | [2025-06-19T23:13:48.554+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-1b756ea5-3034-4556-91e7-cbcf68544e19 has no more requests policy-pap | [2025-06-19T23:14:18.354+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=39ce5ee3-d10f-46cd-9314-200f4f87d9bd, expireMs=1750374858354] policy-pap | [2025-06-19T23:14:18.498+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a7f59166-1371-41e3-986a-b174e5779032, expireMs=1750374858497] policy-pap | [2025-06-19T23:15:20.639+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-19T23:15:20.648+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-19T23:15:21.022+00:00|INFO|SessionData|http-nio-6969-exec-10] unknown group testGroup policy-pap | [2025-06-19T23:15:21.608+00:00|INFO|SessionData|http-nio-6969-exec-10] create cached group testGroup policy-pap | [2025-06-19T23:15:21.608+00:00|INFO|SessionData|http-nio-6969-exec-10] creating DB group testGroup policy-pap | [2025-06-19T23:15:22.102+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-pap | [2025-06-19T23:15:22.386+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-19T23:15:22.480+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-19T23:15:22.480+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup policy-pap | [2025-06-19T23:15:22.481+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup policy-pap | [2025-06-19T23:15:22.493+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-19T23:15:22Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-19T23:15:22Z, user=policyadmin)] policy-pap | [2025-06-19T23:15:23.165+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup policy-pap | [2025-06-19T23:15:23.165+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-pap | [2025-06-19T23:15:23.165+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy 
onap.restart.tca 1.0.0 policy-pap | [2025-06-19T23:15:23.165+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup policy-pap | [2025-06-19T23:15:23.166+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup policy-pap | [2025-06-19T23:15:23.176+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-19T23:15:23Z, user=policyadmin)] policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group defaultGroup policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup policy-pap | [2025-06-19T23:15:23.540+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup policy-pap | [2025-06-19T23:15:23.548+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-19T23:15:23Z, user=policyadmin)] policy-pap | [2025-06-19T23:15:24.067+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-pap | [2025-06-19T23:15:24.070+00:00|INFO|SessionData|http-nio-6969-exec-5] deleting DB group testGroup policy-pap | [2025-06-19T23:15:26.052+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-19T23:15:48.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"6b4f1297-9901-4ef3-a620-605458caba61","timestampMs":1750374948408,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-19T23:15:48.420+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-19T23:15:48.420+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"6b4f1297-9901-4ef3-a620-605458caba61","timestampMs":1750374948408,"name":"apex-1b756ea5-3034-4556-91e7-cbcf68544e19","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... 
ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-19 23:12:50.162 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-19 23:12:50.163 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-19 23:12:50.169 UTC [51] LOG: database system was shut down at 2025-06-19 23:12:49 UTC postgres | 2025-06-19 23:12:50.175 UTC [48] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | waiting for server to shut down...2025-06-19 23:12:51.609 UTC [48] LOG: 
received fast shutdown request postgres | .2025-06-19 23:12:51.611 UTC [48] LOG: aborting any active transactions postgres | 2025-06-19 23:12:51.613 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 postgres | 2025-06-19 23:12:51.615 UTC [49] LOG: shutting down postgres | 2025-06-19 23:12:51.617 UTC [49] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-19 23:12:52.108 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.392 s, sync=0.091 s, total=0.493 s; sync files=1788, longest=0.005 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-19 23:12:52.117 UTC [48] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-19 23:12:52.147 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-19 23:12:52.147 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-19 23:12:52.147 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-19 23:12:52.150 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-19 23:12:52.155 UTC [101] LOG: database system was shut down at 2025-06-19 23:12:52 UTC postgres | 2025-06-19 23:12:52.164 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-19T23:12:45.122Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-19T23:12:45.122Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-19T23:12:45.122Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-19T23:12:45.123Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-19T23:12:45.127Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-19T23:12:45.129Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-19T23:12:45.132Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-19T23:12:45.132Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.39µs prometheus | time=2025-06-19T23:12:45.132Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-19T23:12:45.132Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-19T23:12:45.133Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-19T23:12:45.133Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=221.753µs prometheus | time=2025-06-19T23:12:45.133Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=20.71µs wal_replay_duration=241.793µs wbl_replay_duration=280ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.39µs total_replay_duration=319.694µs prometheus | time=2025-06-19T23:12:45.136Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-19T23:12:45.136Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-19T23:12:45.136Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-19T23:12:45.139Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-19T23:12:45.139Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.68µs remote_storage=4.19µs web_handler=820ns query_engine=1.73µs scrape=425.705µs scrape_sd=327.524µs notify=206.562µs notify_sd=24.601µs rules=2.13µs tracing=7.49µs filename=/etc/prometheus/prometheus.yml totalDuration=2.695971ms prometheus | time=2025-06-19T23:12:45.139Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-19T23:12:45.139Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2025-06-19 23:12:44,463 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2025-06-19 23:12:44,522 INFO org.onap.policy.models.simulators starting simulator | 2025-06-19 23:12:44,522 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2025-06-19 23:12:44,735 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2025-06-19 23:12:44,736 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2025-06-19 23:12:44,936 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-19 23:12:44,948 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, 
user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-19 23:12:44,951 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-19 23:12:44,956 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-19 23:12:44,996 INFO Session workerName=node0 simulator | 2025-06-19 23:12:45,007 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-19 23:12:45,557 INFO Using GSON for REST calls simulator | 2025-06-19 23:12:45,625 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-19 23:12:45,632 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2025-06-19 23:12:45,633 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @1648ms simulator | 2025-06-19 23:12:45,633 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4317 ms. 
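Each simulator reports a "pending time" while its Jetty connector comes up: port 6666 (A&AI) above, and ports 6668 (SDNC) and 6669 (SO) in the entries that follow. Test code normally polls those listeners before driving traffic at them. A stdlib-only sketch of such a readiness wait; the host and timeout values are illustrative, not taken from the job configuration:

# Sketch: block until the simulator ports seen in the log accept TCP connections.
import socket
import time

def wait_for_port(host: str, port: int, timeout_s: float = 60.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return  # listener is up
        except OSError:
            time.sleep(1.0)  # connector not bound yet; retry
    raise TimeoutError(f"{host}:{port} not reachable within {timeout_s}s")

for port in (6666, 6668, 6669):  # A&AI, SDNC, SO simulators
    wait_for_port("localhost", port)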
simulator | 2025-06-19 23:12:45,637 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2025-06-19 23:12:45,639 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-19 23:12:45,640 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-19 23:12:45,641 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-19 23:12:45,642 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-19 23:12:45,650 INFO Session workerName=node0 simulator | 2025-06-19 23:12:45,652 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-19 23:12:45,705 INFO Using GSON for REST calls simulator | 2025-06-19 23:12:45,715 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-19 23:12:45,717 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2025-06-19 23:12:45,717 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @1732ms 
simulator | 2025-06-19 23:12:45,717 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4924 ms. simulator | 2025-06-19 23:12:45,719 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2025-06-19 23:12:45,723 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-19 23:12:45,724 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-19 23:12:45,726 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-19 23:12:45,727 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-19 23:12:45,736 INFO Session workerName=node0 simulator | 2025-06-19 23:12:45,738 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-19 23:12:45,788 INFO Using GSON for REST calls simulator | 2025-06-19 23:12:45,800 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-19 23:12:45,801 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2025-06-19 23:12:45,802 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @1816ms simulator | 2025-06-19 23:12:45,802 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4924 ms. simulator | 2025-06-19 23:12:45,803 INFO org.onap.policy.models.simulators started zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-19 23:12:50,250] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,252] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,252] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,252] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,252] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,255] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 23:12:50,255] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 23:12:50,255] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 23:12:50,255] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-19 23:12:50,256] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-19 23:12:50,256] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,257] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,257] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,257] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,257] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 23:12:50,257] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-19 23:12:50,267] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-19 23:12:50,270] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-19 23:12:50,270] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-19 23:12:50,272] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-19 23:12:50,280] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,280] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,281] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,281] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,281] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,281] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-19 23:12:50,281] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 23:12:50,281] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,282] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,283] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-19 23:12:50,284] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,284] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,285] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-19 23:12:50,286] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,286] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-19 23:12:50,288] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,288] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,289] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-19 23:12:50,289] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-19 23:12:50,289] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,319] INFO Logging initialized @427ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-19 23:12:50,375] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-19 23:12:50,375] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-19 23:12:50,391] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-19 23:12:50,423] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-19 23:12:50,423] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-19 23:12:50,424] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-19 23:12:50,427] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-19 23:12:50,435] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-19 23:12:50,444] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-19 23:12:50,445] INFO Started @558ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-19 23:12:50,445] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2025-06-19 23:12:50,448] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-19 23:12:50,449] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-19 23:12:50,450] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-19 23:12:50,451] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-19 23:12:50,466] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-19 23:12:50,466] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-19 23:12:50,466] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-19 23:12:50,466] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-19 23:12:50,487] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-19 23:12:50,487] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-19 23:12:50,490] INFO Snapshot loaded in 24 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-19 23:12:50,491] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-19 23:12:50,491] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-19 23:12:50,498] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-19 23:12:50,498] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-19 23:12:50,511] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-19 23:12:50,511] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-19 23:12:51,491] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
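The AdminServer record above shows ZooKeeper exposing its command endpoint on 0.0.0.0:8080 under /commands, with the client port bound on 2181. A minimal health probe against those two endpoints, assuming they are reachable from the build host, would be:

  # Query the AdminServer started above (port 8080, command URL /commands);
  # "ruok" and "stat" are standard AdminServer commands.
  curl -s http://localhost:8080/commands/ruok
  curl -s http://localhost:8080/commands/stat
  # Classic four-letter-word probe against the client port 2181; newer
  # ZooKeeper releases only answer if the command is in 4lw.commands.whitelist.
  echo srvr | nc localhost 2181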
Tearing down containers...
Container grafana Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-apex-pdp Stopping
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-apex-pdp Stopped
Container policy-apex-pdp Removing
Container policy-apex-pdp Removed
Container simulator Stopping
Container policy-pap Stopping
Container simulator Stopped
Container simulator Removing
Container simulator Removed
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2068 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10424239424434819826.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1085316571583774460.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
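The xtrace output above is the gist of package-listing.sh: take a dpkg snapshot at job end, diff it against the snapshot taken at job start, and copy all three files into the workspace archives. A condensed sketch of that logic, assuming the redirections performed by the suppressed lines:

  #!/bin/bash
  # Snapshot installed packages and diff against the job-start snapshot.
  START=/tmp/packages_start.txt
  END=/tmp/packages_end.txt
  DIFF=/tmp/packages_diff.txt
  dpkg -l | grep '^ii' > "$END"              # packages present now
  if [ -f "$START" ] && [ -f "$END" ]; then
      diff "$START" "$END" > "$DIFF"         # what the build added or removed
  fi
  mkdir -p "$WORKSPACE/archives/"            # $WORKSPACE as set by Jenkins
  cp -f "$DIFF" "$END" "$START" "$WORKSPACE/archives/"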
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4831985021428314300.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-wKOo from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-wKOo/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7906504458489102098.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config4148496003903140468tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10068720891813159004.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14432802601145153745.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-wKOo from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-wKOo/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16294878743677001160.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12294978833757230688.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-wKOo from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-wKOo/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6957895101830725601.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-wKOo from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-wKOo/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2107
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
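logs-deploy.sh pushes the console log and the archived workspace patterns to the Nexus path printed above via lftools. A sketch of the equivalent direct invocations; the exact argument order is my recollection of the lftools CLI and should be treated as an assumption:

  # Values copied from the log above; $BUILD_URL and $WORKSPACE come from Jenkins.
  NEXUS_URL=https://nexus.onap.org
  NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2107
  lftools deploy logs "$NEXUS_URL" "$NEXUS_PATH" "$BUILD_URL"
  lftools deploy archives -p '**/target/surefire-reports/*-output.txt' \
      "$NEXUS_URL" "$NEXUS_PATH" "$WORKSPACE"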
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-22423 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   16G  140G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         863       23289           0        8014       30848
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:9b:0d:98 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.95/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85958sec preferred_lft 85958sec
    inet6 fe80::f816:3eff:fe9b:d98/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c1:bc:22:4f brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c1ff:febc:224f/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22423)  06/19/25  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:02          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       146.09     23.74    122.35   2378.85  44030.77
23:13:01       697.98      3.47    694.52    435.93 253717.05
23:14:01        50.02      0.15     49.88     16.93  40167.57
23:15:01       242.88      0.32    242.56     29.46  67873.62
23:16:01         8.30      0.00      8.30      0.00   3405.57
23:17:01        61.07      0.65     60.42     31.46   1001.57
Average:       201.21      4.67    196.54    476.82  68433.80

23:11:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     27591728  31555924   5347484     16.23     90168   4134292   2458668      7.23   1033704   3913428   2382316
23:13:01     23547188  30908824   9392024     28.51    163444   7258556   6888836     20.27   1881796   6840664     59932
23:14:01     22315628  29767288  10623584     32.25    164980   7349892   8402252     24.72   3115316   6822828       696
23:15:01     21558996  29551192  11380216     34.55    206044   7796320   8810968     25.92   3419884   7211288      1940
23:16:01     21605368  29598584  11333844     34.41    206200   7797244   8792604     25.87   3378208   7207648       284
23:17:01     23918220  31648164   9020992     27.39    206920   7528260   1559036      4.59   1387100   6965016       280
Average:     23422855  30504996   9516357     28.89    172959   6977427   6152061     18.10   2369335   6493479    407575

23:11:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01         ens3   1074.58    659.01  26213.60     54.95      0.00      0.00      0.00      0.00
23:12:01           lo     13.29     13.29      1.25      1.25      0.00      0.00      0.00      0.00
23:13:01  veth7dcf7c0      0.37      0.53      0.02      0.03      0.00      0.00      0.00      0.00
23:13:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01  veth0cf8b55      1.30      1.77      0.15      0.17      0.00      0.00      0.00      0.00
23:13:01         ens3    597.42    330.63  17940.97     28.22      0.00      0.00      0.00      0.00
23:14:01  veth7dcf7c0     10.33     10.93      2.09      1.45      0.00      0.00      0.00      0.00
23:14:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01  veth0cf8b55      3.53      4.13      0.61      0.68      0.00      0.00      0.00      0.00
23:14:01         ens3      3.43      1.90      5.83      0.80      0.00      0.00      0.00      0.00
23:15:01  veth7dcf7c0      6.47      9.38      1.51      0.73      0.00      0.00      0.00      0.00
23:15:01      docker0    150.06    193.93      9.31   1348.89      0.00      0.00      0.00      0.00
23:15:01  veth0cf8b55      0.17      0.38      0.01      0.03      0.00      0.00      0.00      0.00
23:15:01         ens3    251.76    178.34   2197.51     13.80      0.00      0.00      0.00      0.00
23:16:01  veth7dcf7c0    158.16    160.09     19.61     38.18      0.00      0.00      0.00      0.00
23:16:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01  veth0cf8b55      0.17      0.33      0.01      0.02      0.00      0.00      0.00      0.00
23:16:01         ens3      1.43      1.40      0.34      0.50      0.00      0.00      0.00      0.00
23:17:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01         ens3     36.79     32.04     60.10     25.90      0.00      0.00      0.00      0.00
23:17:01           lo     27.83     27.83      2.50      2.50      0.00      0.00      0.00      0.00
Average:      docker0     25.08     32.41      1.56    225.44      0.00      0.00      0.00      0.00
Average:         ens3    325.49    199.28   7684.93     20.60      0.00      0.00      0.00      0.00
Average:           lo      3.99      3.99      0.36      0.36      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22423)  06/19/25  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01        all     12.94      0.00      2.83      2.39      0.04     81.80
23:12:01          0      8.64      0.00      3.03      8.16      0.05     80.12
23:12:01          1     11.13      0.00      2.23      5.89      0.07     80.68
23:12:01          2     22.33      0.00      3.27      0.39      0.05     73.95
23:12:01          3      6.57      0.00      2.11      0.14      0.03     91.15
23:12:01          4      6.18      0.00      2.65      0.97      0.03     90.16
23:12:01          5     14.03      0.00      3.03      1.41      0.05     81.47
23:12:01          6      5.18      0.00      2.86      0.41      0.03     91.52
23:12:01          7     29.47      0.00      3.40      1.73      0.05     65.35
23:13:01        all     18.97      0.00      7.98     10.05      0.09     62.90
23:13:01          0     17.96      0.00      7.27      4.49      0.08     70.20
23:13:01          1     20.10      0.00      7.37      2.14      0.10     70.29
23:13:01          2     19.36      0.00      9.62     29.38      0.12     41.52
23:13:01          3     20.32      0.00      8.28      8.38      0.07     62.95
23:13:01          4     19.13      0.00      8.78     14.28      0.08     57.72
23:13:01          5     18.43      0.00      7.43     11.60      0.08     62.46
23:13:01          6     18.64      0.00      7.54      5.13      0.10     68.59
23:13:01          7     17.87      0.00      7.62      5.20      0.08     69.23
23:14:01        all     22.05      0.00      2.05      0.97      0.08     74.84
23:14:01          0     20.32      0.00      2.09      0.05      0.07     77.47
23:14:01          1     22.53      0.00      2.09      0.12      0.08     75.17
23:14:01          2     29.24      0.00      2.54      0.35      0.08     67.78
23:14:01          3     25.82      0.00      2.21      0.00      0.08     71.89
23:14:01          4     20.41      0.00      1.74      0.07      0.10     77.68
23:14:01          5     20.80      0.00      1.85      0.05      0.08     77.22
23:14:01          6     14.44      0.00      1.79      6.04      0.08     77.65
23:14:01          7     22.85      0.00      2.06      1.12      0.08     73.89
23:15:01        all      8.91      0.00      2.36      2.74      0.07     85.91
23:15:01          0     10.26      0.00      2.61      1.39      0.07     85.67
23:15:01          1     16.64      0.00      3.37      0.52      0.07     79.41
23:15:01          2      4.86      0.00      2.01      1.16      0.07     91.90
23:15:01          3      7.99      0.00      1.88      1.86      0.08     88.19
23:15:01          4      9.19      0.00      2.33      3.17      0.05     85.25
23:15:01          5      9.16      0.00      2.79      3.43      0.10     84.52
23:15:01          6      6.13      0.00      1.85      9.80      0.07     82.16
23:15:01          7      7.06      0.00      2.06      0.65      0.07     90.16
23:16:01        all      3.75      0.00      0.38      0.11      0.06     95.70
23:16:01          0      4.09      0.00      0.37      0.00      0.07     95.48
23:16:01          1      3.69      0.00      0.63      0.00      0.07     95.61
23:16:01          2      4.19      0.00      0.28      0.00      0.05     95.48
23:16:01          3      3.16      0.00      0.42      0.13      0.08     96.21
23:16:01          4      3.76      0.00      0.35      0.02      0.05     95.82
23:16:01          5      4.56      0.00      0.37      0.00      0.08     94.99
23:16:01          6      4.68      0.00      0.43      0.67      0.07     94.15
23:16:01          7      1.87      0.00      0.20      0.02      0.03     97.88
23:17:01        all      1.74      0.00      0.58      0.09      0.06     97.54
23:17:01          0      1.49      0.00      0.54      0.10      0.07     97.81
23:17:01          1      1.79      0.00      0.55      0.03      0.03     97.59
23:17:01          2      3.21      0.00      0.43      0.05      0.03     96.28
23:17:01          3      1.40      0.00      0.58      0.23      0.05     97.73
23:17:01          4      1.35      0.00      0.64      0.07      0.05     97.89
23:17:01          5      1.14      0.00      0.64      0.23      0.07     97.93
23:17:01          6      2.16      0.00      0.70      0.02      0.08     97.04
23:17:01          7      1.35      0.00      0.55      0.02      0.05     98.03
Average:        all     11.37      0.00      2.68      2.71      0.07     83.17
Average:          0     10.46      0.00      2.64      2.35      0.07     84.49
Average:          1     12.63      0.00      2.70      1.43      0.07     83.17
Average:          2     13.77      0.00      2.99      5.14      0.07     78.04
Average:          3     10.87      0.00      2.57      1.78      0.07     84.71
Average:          4      9.99      0.00      2.73      3.07      0.06     84.14
Average:          5     11.32      0.00      2.67      2.77      0.08     83.16
Average:          6      8.53      0.00      2.52      3.68      0.07     85.20
Average:          7     13.35      0.00      2.64      1.45      0.06     82.50
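The three reports above are what sysstat.sh renders from the sadc samples gathered while the job ran. To regenerate them from a sysstat data file, something like the following should work; the /var/log/sysstat/saDD path is the Ubuntu default and is an assumption here:

  # Block-I/O, memory and per-interface reports, then the per-CPU breakdown,
  # read back from the day-19 data file instead of sampling live.
  sar -b -r -n DEV -f /var/log/sysstat/sa19
  sar -P ALL -f /var/log/sysstat/sa19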