Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141302
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21635 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-aF43FOkWaTXe/agent.2023
SSH_AGENT_PID=2025
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_15191052278812667052.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_15191052278812667052.key)
[ssh-agent] Using credentials onap-jobbuilder (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/02/141302/1 # timeout=30
 > git rev-parse ed38a50541249063daf2cfb00b312fb173adeace^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision ed38a50541249063daf2cfb00b312fb173adeace (refs/changes/02/141302/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ed38a50541249063daf2cfb00b312fb173adeace # timeout=30
Commit message: "Remove python from the java app docker images"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=10
provisioning config files...
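For reference, the checkout above can be reproduced outside Jenkins with the same change ref and commit the plugin used; this is a minimal sketch, assuming the anonymous git:// mirror is reachable (Gerrit's HTTPS remote works equally well):

  # fetch Gerrit change 141302, patchset 1, and check out its commit
  $ git init policy-docker && cd policy-docker
  $ git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/02/141302/1
  $ git checkout -f FETCH_HEAD   # ed38a50541249063daf2cfb00b312fb173adeace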
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins540439845137485074.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-DwzS
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-DwzS/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-DwzS/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
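lf-activate-venv() comes from the LF releng global-jjb scripts; a rough local equivalent of what it does in this build is the following sketch (the venv path and freeze target are illustrative, not the script's exact behaviour):

  # approximate local equivalent of lf-activate-venv() + "Generating Requirements File"
  $ pyenv local 3.10.6                        # pick the interpreter pyenv lists above
  $ python3 -m venv /tmp/venv-DwzS            # venv path here is whatever mktemp produced
  $ /tmp/venv-DwzS/bin/pip install lftools    # the one package the job requests
  $ export PATH=/tmp/venv-DwzS/bin:$PATH
  $ pip freeze > requirements.txt             # yields the pinned list shown above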
[policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins5399758174120906888.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins411628167843063326.sh
+ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 20 60.2M   20 12.5M    0     0  28.1M      0  0:00:02 --:--:--  0:00:02 28.1M
100 60.2M  100 60.2M    0     0  59.1M      0  0:00:01  0:00:01 --:--:-- 83.3M
Setting project configuration for: pap
Configuring docker compose...
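Two fixable items surface here: the docker login WARNING and the missing Compose plugin the script then downloads (the ~60.2M binary in the curl output). A minimal sketch of both, assuming a per-user plugin install; the registry host and credential variables are illustrative, and the release URL follows the docker/compose GitHub asset naming:

  # avoid the WARNING by piping the password instead of passing it in argv
  $ echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin nexus3.onap.org:10001

  # install the Compose v2 CLI plugin so 'docker compose' is a valid subcommand
  $ mkdir -p ~/.docker/cli-plugins
  $ curl -fsSL -o ~/.docker/cli-plugins/docker-compose \
      https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
  $ chmod +x ~/.docker/cli-plugins/docker-compose
  $ docker compose version   # should now succeed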
Starting apex-pdp using postgres + Grafana/Prometheus
api Pulling
postgres Pulling
grafana Pulling
apex-pdp Pulling
policy-db-migrator Pulling
pap Pulling
prometheus Pulling
simulator Pulling
kafka Pulling
zookeeper Pulling
[interleaved per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress output elided]
policy-db-migrator Pulled
pap Pulled
api Pulled
simulator Pulled
apex-pdp Pulled
[log truncated mid-pull; postgres, grafana, prometheus, kafka and zookeeper layer downloads still in progress]
Downloading [==========================================> ] 321.7MB/375MB 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB c4d302cc468d Downloading [> ] 48.06kB/4.534MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB 2d429b9e73a6 Downloading [========> ] 5.012MB/29.13MB eabd8714fec9 Downloading [============================================> ] 335.8MB/375MB c4d302cc468d Downloading [===================================> ] 3.194MB/4.534MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB c4d302cc468d Verifying Checksum c4d302cc468d Download complete 01e0882c90d9 Downloading [> ] 15.3kB/1.447MB f3b09c502777 Extracting [====> ] 5.571MB/56.52MB 2d429b9e73a6 Downloading [=========================> ] 15.04MB/29.13MB eabd8714fec9 Downloading [==============================================> ] 352MB/375MB 01e0882c90d9 Verifying Checksum 01e0882c90d9 Download complete 531ee2cf3c0c Downloading [> ] 80.83kB/8.066MB 55f2b468da67 Extracting [=============================================> ] 235.1MB/257.9MB eca0188f477e Pull complete 531ee2cf3c0c Downloading [==> ] 408.5kB/8.066MB eabd8714fec9 Downloading [================================================> ] 362.2MB/375MB 2d429b9e73a6 Downloading [======================================> ] 22.41MB/29.13MB f3b09c502777 Extracting [=======> ] 8.356MB/56.52MB 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 2d429b9e73a6 Verifying Checksum 2d429b9e73a6 Download complete eabd8714fec9 Downloading [=================================================> ] 374.1MB/375MB eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete 531ee2cf3c0c Downloading [======================> ] 3.685MB/8.066MB 55f2b468da67 Extracting [==============================================> ] 237.9MB/257.9MB f3b09c502777 Extracting [=========> ] 10.58MB/56.52MB 12c5c803443f Downloading [==================================================>] 116B/116B 12c5c803443f Verifying Checksum 12c5c803443f Download complete ed54a7dee1d8 Downloading [> ] 15.3kB/1.196MB 2d429b9e73a6 Extracting [> ] 294.9kB/29.13MB e27c75a98748 Downloading [===============================================> ] 3.011kB/3.144kB e27c75a98748 Downloading [==================================================>] 3.144kB/3.144kB e27c75a98748 Verifying Checksum e27c75a98748 Download complete ed54a7dee1d8 Verifying Checksum ed54a7dee1d8 Download complete 531ee2cf3c0c Verifying Checksum 531ee2cf3c0c Download complete a83b68436f09 Downloading [===============> ] 3.011kB/9.919kB a83b68436f09 Downloading [==================================================>] 9.919kB/9.919kB a83b68436f09 Verifying Checksum a83b68436f09 Download complete 787d6bee9571 Downloading [==================================================>] 127B/127B e73cb4a42719 Downloading [> ] 539.6kB/109.1MB 787d6bee9571 Verifying Checksum 787d6bee9571 Download complete 13ff0988aaea Downloading [==================================================>] 167B/167B 13ff0988aaea Verifying Checksum 13ff0988aaea Download complete f3b09c502777 Extracting [============> ] 13.93MB/56.52MB 7e568a0dc8fb Downloading [==================================================>] 184B/184B 7e568a0dc8fb Verifying Checksum 7e568a0dc8fb Download complete 4b82842ab819 Downloading [===========================> ] 3.011kB/5.415kB 4b82842ab819 Downloading [==================================================>] 5.415kB/5.415kB 4b82842ab819 Download complete e73cb4a42719 Downloading [=> ] 2.702MB/109.1MB 
f3b09c502777 Extracting [===============> ] 17.83MB/56.52MB e60d9caeb0b8 Pull complete 2d429b9e73a6 Extracting [====> ] 2.359MB/29.13MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B e73cb4a42719 Downloading [==> ] 6.487MB/109.1MB f61a19743345 Extracting [> ] 65.54kB/3.524MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 2d429b9e73a6 Extracting [========> ] 5.014MB/29.13MB e73cb4a42719 Downloading [=====> ] 12.98MB/109.1MB f61a19743345 Extracting [=========> ] 655.4kB/3.524MB f3b09c502777 Extracting [====================> ] 22.84MB/56.52MB 2d429b9e73a6 Extracting [=============> ] 7.963MB/29.13MB e73cb4a42719 Downloading [========> ] 17.84MB/109.1MB f61a19743345 Extracting [================> ] 1.18MB/3.524MB e444bcd4d577 Pull complete 2d429b9e73a6 Extracting [================> ] 9.437MB/29.13MB f3b09c502777 Extracting [=======================> ] 26.74MB/56.52MB e73cb4a42719 Downloading [==========> ] 22.71MB/109.1MB f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 2d429b9e73a6 Extracting [=================> ] 10.32MB/29.13MB f3b09c502777 Extracting [==========================> ] 30.08MB/56.52MB eabd8714fec9 Extracting [> ] 557.1kB/375MB e73cb4a42719 Downloading [=============> ] 29.2MB/109.1MB 55f2b468da67 Extracting [================================================> ] 250.7MB/257.9MB 2d429b9e73a6 Extracting [======================> ] 12.98MB/29.13MB f3b09c502777 Extracting [==================================> ] 39.55MB/56.52MB eabd8714fec9 Extracting [=> ] 11.14MB/375MB e73cb4a42719 Downloading [=================> ] 38.39MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 2d429b9e73a6 Extracting [============================> ] 16.81MB/29.13MB f3b09c502777 Extracting [===========================================> ] 49.58MB/56.52MB eabd8714fec9 Extracting [==> ] 18.38MB/375MB e73cb4a42719 Downloading [========================> ] 52.44MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 256.2MB/257.9MB 2d429b9e73a6 Extracting [=====================================> ] 21.82MB/29.13MB f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB eabd8714fec9 Extracting [===> ] 23.4MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB e73cb4a42719 Downloading [==============================> ] 66.5MB/109.1MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB e73cb4a42719 Downloading [=================================> ] 74.07MB/109.1MB 2d429b9e73a6 Extracting [=========================================> ] 24.18MB/29.13MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB e73cb4a42719 Downloading [========================================> ] 87.59MB/109.1MB 2d429b9e73a6 Extracting [===========================================> ] 25.07MB/29.13MB eabd8714fec9 Extracting [====> ] 32.31MB/375MB e73cb4a42719 Downloading 
[===============================================> ] 102.7MB/109.1MB f61a19743345 Pull complete eabd8714fec9 Extracting [=====> ] 44.56MB/375MB e73cb4a42719 Downloading [================================================> ] 104.9MB/109.1MB e73cb4a42719 Verifying Checksum e73cb4a42719 Download complete 2d429b9e73a6 Extracting [================================================> ] 28.02MB/29.13MB f3b09c502777 Pull complete eabd8714fec9 Extracting [=======> ] 54.59MB/375MB eabd8714fec9 Extracting [=======> ] 58.49MB/375MB 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB eabd8714fec9 Extracting [=========> ] 67.96MB/375MB eabd8714fec9 Extracting [===========> ] 83.56MB/375MB eabd8714fec9 Extracting [============> ] 96.37MB/375MB 8af57d8c9f49 Extracting [> ] 98.3kB/8.735MB 8af57d8c9f49 Extracting [====> ] 786.4kB/8.735MB eabd8714fec9 Extracting [==============> ] 105.3MB/375MB 8af57d8c9f49 Extracting [======================================> ] 6.685MB/8.735MB eabd8714fec9 Extracting [==============> ] 109.7MB/375MB 55f2b468da67 Pull complete 8af57d8c9f49 Extracting [==================================================>] 8.735MB/8.735MB eabd8714fec9 Extracting [===============> ] 115.3MB/375MB 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B eabd8714fec9 Extracting [===============> ] 118.1MB/375MB eabd8714fec9 Extracting [================> ] 125.3MB/375MB eabd8714fec9 Extracting [=================> ] 132MB/375MB eabd8714fec9 Extracting [==================> ] 137.6MB/375MB eabd8714fec9 Extracting [===================> ] 143.7MB/375MB eabd8714fec9 Extracting [===================> ] 149.3MB/375MB 2d429b9e73a6 Pull complete eabd8714fec9 Extracting [====================> ] 153.7MB/375MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 82bfc142787e Extracting [=====> ] 884.7kB/8.613MB eabd8714fec9 Extracting [=====================> ] 159.3MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [=====================> ] 164.9MB/375MB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB eabd8714fec9 Extracting [=======================> ] 176.6MB/375MB eabd8714fec9 Extracting [=========================> ] 195MB/375MB 8af57d8c9f49 Pull complete eabd8714fec9 Extracting [============================> ] 211.7MB/375MB eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 408012a7b118 Pull complete eabd8714fec9 Extracting [=============================> ] 221.7MB/375MB c53a11b7c6fc Extracting [============================> ] 32.77kB/58.08kB c53a11b7c6fc Extracting [==================================================>] 58.08kB/58.08kB c53a11b7c6fc Extracting [==================================================>] 58.08kB/58.08kB eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB eabd8714fec9 Extracting [================================> ] 241.8MB/375MB eabd8714fec9 Extracting [================================> ] 247.3MB/375MB eabd8714fec9 Extracting [=================================> ] 
251.8MB/375MB eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB eabd8714fec9 Extracting [===================================> ] 265.2MB/375MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 82bfc142787e Pull complete 46eab5b44a35 Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Extracting [=====================================> ] 281.3MB/375MB eabd8714fec9 Extracting [======================================> ] 289.1MB/375MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB c53a11b7c6fc Pull complete eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB c4d302cc468d Extracting [> ] 65.54kB/4.534MB eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB c4d302cc468d Extracting [======> ] 589.8kB/4.534MB eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB eabd8714fec9 Extracting [==========================================> ] 315.3MB/375MB c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB eabd8714fec9 Extracting [==========================================> ] 319.2MB/375MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB 46baca71a4ef Pull complete 44986281b8b9 Pull complete c4d302cc468d Pull complete eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB e032d0a5e409 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB eabd8714fec9 Extracting [============================================> ] 330.3MB/375MB b0e0ef7895f4 Extracting [===============> ] 11.4MB/37.01MB b0e0ef7895f4 Extracting [========================> ] 18.48MB/37.01MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB b0e0ef7895f4 Extracting 
[======================================> ] 28.7MB/37.01MB eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB c49e0ee60bfb Extracting [> ] 557.1kB/107.3MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB c49e0ee60bfb Extracting [==> ] 4.456MB/107.3MB c49e0ee60bfb Extracting [===> ] 8.356MB/107.3MB eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB c49e0ee60bfb Extracting [=====> ] 12.26MB/107.3MB eabd8714fec9 Extracting [==============================================> ] 347.6MB/375MB c49e0ee60bfb Extracting [=======> ] 16.15MB/107.3MB eabd8714fec9 Extracting [===============================================> ] 353.2MB/375MB 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB eabd8714fec9 Extracting [===============================================> ] 353.7MB/375MB c49e0ee60bfb Extracting [=======> ] 16.71MB/107.3MB c49e0ee60bfb Extracting [========> ] 17.83MB/107.3MB 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB c49e0ee60bfb Extracting [==========> ] 21.73MB/107.3MB eabd8714fec9 Extracting [================================================> ] 360.4MB/375MB c49e0ee60bfb Extracting [=============> ] 29.52MB/107.3MB eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB c49e0ee60bfb Extracting [================> ] 36.21MB/107.3MB eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB c49e0ee60bfb Extracting [==================> ] 40.11MB/107.3MB eabd8714fec9 Extracting [=================================================> ] 373.2MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB c49e0ee60bfb Extracting [=====================> ] 45.12MB/107.3MB c49e0ee60bfb Extracting [======================> ] 47.91MB/107.3MB c49e0ee60bfb Extracting [========================> ] 51.81MB/107.3MB c49e0ee60bfb Extracting [==========================> ] 56.26MB/107.3MB c49e0ee60bfb Extracting [============================> ] 60.16MB/107.3MB c49e0ee60bfb Extracting [==============================> ] 64.62MB/107.3MB bf70c5107ab5 Pull complete b0e0ef7895f4 Pull complete c49e0ee60bfb Extracting [==============================> ] 66.29MB/107.3MB c49e0ee60bfb Extracting [=================================> ] 71.86MB/107.3MB c49e0ee60bfb Extracting [===================================> ] 75.76MB/107.3MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 01e0882c90d9 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Pull complete 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting 
[==================================================>] 1.103kB/1.103kB c49e0ee60bfb Extracting [=====================================> ] 79.66MB/107.3MB 1ccde423731d Pull complete 531ee2cf3c0c Extracting [==> ] 393.2kB/8.066MB c0c90eeb8aca Pull complete 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 45fd2fec8a19 Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 8f10199ed94b Extracting [> ] 98.3kB/8.768MB c49e0ee60bfb Extracting [======================================> ] 82.44MB/107.3MB 531ee2cf3c0c Extracting [=======================> ] 3.834MB/8.066MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 5cfb27c10ea5 Pull complete c49e0ee60bfb Extracting [=======================================> ] 85.79MB/107.3MB 8f10199ed94b Extracting [==> ] 491.5kB/8.768MB 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 531ee2cf3c0c Extracting [=================================> ] 5.407MB/8.066MB c49e0ee60bfb Extracting [==========================================> ] 90.8MB/107.3MB 8f10199ed94b Extracting [===============================> ] 5.505MB/8.768MB 7df673c7455d Pull complete prometheus Pulled 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 531ee2cf3c0c Extracting [=================================================> ] 7.963MB/8.066MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB c49e0ee60bfb Extracting [============================================> ] 95.81MB/107.3MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB e040ea11fa10 Pull complete 531ee2cf3c0c Pull complete ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB c49e0ee60bfb Extracting [==============================================> ] 99.71MB/107.3MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB f963a77d2726 Pull complete ed54a7dee1d8 Extracting [=========================================> ] 983kB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B c49e0ee60bfb Extracting [================================================> ] 103.1MB/107.3MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09d5a3f70313 Extracting [====> ] 10.03MB/109.2MB 12c5c803443f Pull complete f3a82e9f1761 Extracting [===========> ] 10.55MB/44.41MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 
Extracting [==================================================>] 3.144kB/3.144kB 09d5a3f70313 Extracting [=========> ] 21.17MB/109.2MB c49e0ee60bfb Extracting [================================================> ] 104.2MB/107.3MB f3a82e9f1761 Extracting [===========================> ] 24.31MB/44.41MB 09d5a3f70313 Extracting [=============> ] 28.97MB/109.2MB c49e0ee60bfb Extracting [=================================================> ] 105.8MB/107.3MB e27c75a98748 Pull complete c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB f3a82e9f1761 Extracting [======================================> ] 33.95MB/44.41MB c49e0ee60bfb Pull complete 09d5a3f70313 Extracting [===================> ] 42.89MB/109.2MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB f3a82e9f1761 Extracting [=================================================> ] 44.04MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 09d5a3f70313 Extracting [==========================> ] 56.82MB/109.2MB f3a82e9f1761 Pull complete 384497dbce3b Extracting [> ] 557.1kB/63.48MB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB e73cb4a42719 Extracting [==> ] 4.456MB/109.1MB 09d5a3f70313 Extracting [==============================> ] 65.73MB/109.2MB e73cb4a42719 Extracting [===> ] 7.799MB/109.1MB 384497dbce3b Extracting [> ] 1.114MB/63.48MB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09d5a3f70313 Extracting [==================================> ] 75.76MB/109.2MB 384497dbce3b Extracting [=> ] 1.671MB/63.48MB e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB 09d5a3f70313 Extracting [=======================================> ] 86.9MB/109.2MB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B e73cb4a42719 Extracting [=======> ] 16.15MB/109.1MB 09d5a3f70313 Extracting [===========================================> ] 94.14MB/109.2MB 384497dbce3b Extracting [==> ] 2.785MB/63.48MB e73cb4a42719 Extracting [========> ] 18.94MB/109.1MB 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 09d5a3f70313 Extracting [==============================================> ] 102.5MB/109.2MB e73cb4a42719 Extracting [==========> ] 23.4MB/109.1MB 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 09d5a3f70313 Extracting [================================================> ] 105.3MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 384497dbce3b Extracting [===> ] 5.014MB/63.48MB e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 09d5a3f70313 Extracting [=================================================> ] 108.1MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB e73cb4a42719 Extracting 
[==============> ] 31.75MB/109.1MB 41dac8b43ba6 Pull complete 09d5a3f70313 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB e73cb4a42719 Extracting [================> ] 35.65MB/109.1MB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 71a9f6a9ab4d Pull complete 356f5c2c843b Pull complete kafka Pulled e73cb4a42719 Extracting [==================> ] 40.11MB/109.1MB 384497dbce3b Extracting [========> ] 11.14MB/63.48MB da3ed5db7103 Extracting [> ] 557.1kB/127.4MB e73cb4a42719 Extracting [=====================> ] 46.24MB/109.1MB 384497dbce3b Extracting [==========> ] 12.81MB/63.48MB da3ed5db7103 Extracting [====> ] 10.58MB/127.4MB e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB da3ed5db7103 Extracting [=======> ] 20.05MB/127.4MB e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB da3ed5db7103 Extracting [===========> ] 28.41MB/127.4MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 384497dbce3b Extracting [================> ] 20.61MB/63.48MB da3ed5db7103 Extracting [==============> ] 37.88MB/127.4MB e73cb4a42719 Extracting [===========================> ] 60.16MB/109.1MB da3ed5db7103 Extracting [===================> ] 50.69MB/127.4MB 384497dbce3b Extracting [==================> ] 23.95MB/63.48MB e73cb4a42719 Extracting [===============================> ] 67.96MB/109.1MB da3ed5db7103 Extracting [==========================> ] 66.29MB/127.4MB 384497dbce3b Extracting [=====================> ] 27.3MB/63.48MB e73cb4a42719 Extracting [==================================> ] 74.65MB/109.1MB da3ed5db7103 Extracting [==============================> ] 77.99MB/127.4MB 384497dbce3b Extracting [========================> ] 31.2MB/63.48MB e73cb4a42719 Extracting [====================================> ] 79.1MB/109.1MB da3ed5db7103 Extracting [====================================> ] 94.14MB/127.4MB 384497dbce3b Extracting [==========================> ] 33.42MB/63.48MB e73cb4a42719 Extracting [======================================> ] 83MB/109.1MB da3ed5db7103 Extracting [==========================================> ] 109.2MB/127.4MB 384497dbce3b Extracting [============================> ] 36.21MB/63.48MB e73cb4a42719 Extracting [========================================> ] 87.46MB/109.1MB da3ed5db7103 Extracting [==============================================> ] 119.8MB/127.4MB 384497dbce3b Extracting [===============================> ] 40.11MB/63.48MB e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB da3ed5db7103 Extracting [================================================> ] 123.7MB/127.4MB 384497dbce3b Extracting [==================================> ] 44.01MB/63.48MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 
384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB c955f6e31a04 Pull complete 384497dbce3b Extracting [=======================================> ] 50.14MB/63.48MB zookeeper Pulled e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 384497dbce3b Extracting [=========================================> ] 52.36MB/63.48MB e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB 384497dbce3b Extracting [=============================================> ] 57.93MB/63.48MB e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 384497dbce3b Pull complete 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB e73cb4a42719 Pull complete a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 055b9255fa03 Pull complete b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB a83b68436f09 Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B b176d7edde70 Pull complete 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B grafana Pulled 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Pull complete postgres Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container prometheus Creating Container simulator Creating Container postgres Creating Container simulator Created Container prometheus Created Container grafana Creating Container postgres Created Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container grafana Created Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container prometheus Starting Container simulator Starting Container postgres Starting Container zookeeper Starting Container 
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container simulator Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container prometheus Started
Container grafana Starting
Container grafana Started
Container policy-apex-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for policy-pap to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
Checking if REST port 30001 is open on localhost ...
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/models'...
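Note: the "Checking if REST port ... is open" steps above are typically just a retry loop around a TCP probe followed by a container listing. A minimal sketch of such a check (assumed for illustration; the actual CSIT helper script is not shown in this log):

    # Hypothetical port-wait helper; nc and docker are the only dependencies.
    wait_for_rest_port() {
      local port="$1" retries=30
      until nc -z localhost "$port"; do      # probe the TCP port without sending data
        retries=$((retries - 1))
        [ "$retries" -le 0 ] && { echo "port $port never opened" >&2; return 1; }
        sleep 2
      done
      echo "REST port $port is open on localhost"
    }
    wait_for_rest_port 30003
    # An IMAGE/NAMES/STATUS listing like the one above can be produced with:
    docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'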
Building robot framework docker image
sha256:f8a5fac4dfc3b003bebd0350d9675a3bf9581a4b9d70773081472442ea71c575
top - 14:58:21 up 4 min,  0 users,  load average: 1.78, 1.53, 0.68
Tasks: 234 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.8 us,  3.3 sy,  0.0 ni, 79.1 id,  3.6 wa,  0.0 hi,  0.1 si,  0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         20G         28M        8.1G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up 2 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up 2 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up 2 minutes
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up 2 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up 2 minutes
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up 2 minutes
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up 2 minutes
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
a969725828ff   policy-apex-pdp   1.43%   224MiB / 31.41GiB     0.70%   49.8kB / 65.4kB   0B / 0B         52
d6c708b33f6e   policy-pap        1.15%   501.4MiB / 31.41GiB   1.56%   131kB / 182kB     0B / 139MB      67
d350609aa786   policy-api        0.11%   415.6MiB / 31.41GiB   1.29%   1.15MB / 1.02MB   0B / 0B         59
b95edc23cf5d   kafka             2.82%   377.3MiB / 31.41GiB   1.17%   202kB / 184kB     0B / 590kB      83
c21121185922   grafana           0.19%   109.3MiB / 31.41GiB   0.34%   19.1MB / 168kB    0B / 31.7MB     22
8fad55994d0e   zookeeper         0.29%   85.1MiB / 31.41GiB    0.26%   53.9kB / 44.8kB   229kB / 426kB   62
e438a4763faf   simulator         0.06%   122.9MiB / 31.41GiB   0.38%   1.43kB / 0B       0B / 0B         64
e500c1df0f30   postgres          0.02%   84.34MiB / 31.41GiB   0.26%   1.67MB / 1.73MB   0B / 160MB      26
6a27133036a8   prometheus        0.31%   20.62MiB / 31.41GiB   0.06%   98.1kB / 5.27kB   0B / 53.2kB     13
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
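Note: ROBOT_VARIABLES above is simply a string of -v name:value overrides handed to Robot Framework. A minimal sketch of the equivalent invocation (the wrapper inside the policy-csit image is assumed, not shown in this log):

    # Hypothetical re-run of the two suites with the same variable overrides;
    # relies on shell word-splitting to expand the -v pairs into arguments.
    ROBOT_VARIABLES="-v POLICY_PAP_IP:policy-pap:6969 -v POLICY_API_IP:policy-api:6969"   # plus the remaining -v pairs listed above
    robot ${ROBOT_VARIABLES} --outputdir /tmp/results pap-test.robot pap-slas.robot
    echo "RESULT: $?"   # 0 when every test passes, matching the RESULT line later in this log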
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up 3 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-16T14:56:19.494357566Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T14:56:19Z
grafana | logger=settings t=2025-06-16T14:56:19.49465562Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-16T14:56:19.49466738Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-16T14:56:19.49467151Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T14:56:19.49467519Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-16T14:56:19.49467838Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T14:56:19.49468115Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T14:56:19.49468507Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-16T14:56:19.49468837Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T14:56:19.49469192Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-16T14:56:19.49469609Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T14:56:19.49470002Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T14:56:19.49470425Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-16T14:56:19.49471751Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-16T14:56:19.494742071Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-16T14:56:19.494745781Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-16T14:56:19.494748931Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-16T14:56:19.494752281Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-16T14:56:19.494755441Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-16T14:56:19.495075875Z level=info msg=FeatureToggles alertingApiServer=true ssoSettingsSAML=true logsExploreTableVisualisation=true reportingUseRawTimeRange=true prometheusAzureOverrideAudience=true annotationPermissionUpdate=true alertingQueryAndExpressionsStepMode=true azureMonitorEnableUserAuth=true lokiStructuredMetadata=true dashboardSceneForViewers=true externalCorePlugins=true panelMonitoring=true grafanaconThemes=true tlsMemcached=true dashboardScene=true angularDeprecationUI=true useSessionStorageForRedirection=true lokiLabelNamesQueryApi=true formatString=true correlations=true alertingRulePermanentlyDelete=true influxdbBackendMigration=true publicDashboardsScene=true unifiedStorageSearchPermissionFiltering=true awsAsyncQueryCaching=true groupToNestedTableTransformation=true lokiQuerySplitting=true cloudWatchRoundUpEndTime=true onPremToCloudMigrations=true addFieldFromCalculationStatFunctions=true azureMonitorPrometheusExemplars=true lokiQueryHints=true newPDFRendering=true logsPanelControls=true dataplaneFrontendFallback=true cloudWatchCrossAccountQuerying=true prometheusUsesCombobox=true pinNavItems=true ssoSettingsApi=true recordedQueriesMulti=true kubernetesPlaylists=true pluginsDetailsRightPanel=true alertingNotificationsStepMode=true preinstallAutoUpdate=true promQLScope=true newFiltersUI=true alertingSimplifiedRouting=true cloudWatchNewLabelParsing=true newDashboardSharingComponent=true failWrongDSUID=true alertingUIOptimizeReducer=true logRowsPopoverMenu=true alertingInsights=true dashgpt=true alertRuleRestore=true alertingRuleVersionHistoryRestore=true kubernetesClientDashboardsFolders=true recoveryThreshold=true logsInfiniteScrolling=true unifiedRequestLog=true dashboardSceneSolo=true alertingRuleRecoverDeleted=true logsContextDatasourceUi=true transformationsRedesign=true nestedFolders=true
grafana | logger=sqlstore t=2025-06-16T14:56:19.495147025Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-16T14:56:19.495160486Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-16T14:56:19.496689984Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-16T14:56:19.496704134Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-16T14:56:19.497341751Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-16T14:56:19.498184211Z level=info msg="Migration successfully executed" id="create migration_log table" duration=842.31µs
grafana | logger=migrator t=2025-06-16T14:56:19.503502784Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-16T14:56:19.504192703Z level=info msg="Migration successfully executed" id="create user table" duration=690.109µs
grafana | logger=migrator t=2025-06-16T14:56:19.507762625Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-16T14:56:19.508530284Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=767.249µs
grafana | logger=migrator t=2025-06-16T14:56:19.512048266Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-16T14:56:19.512752894Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=704.308µs
grafana | logger=migrator t=2025-06-16T14:56:19.519086249Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.520315424Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.228645ms
grafana | logger=migrator t=2025-06-16T14:56:19.525536556Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.52673631Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.199654ms
grafana | logger=migrator t=2025-06-16T14:56:19.530443514Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.533237557Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.794393ms
grafana | logger=migrator t=2025-06-16T14:56:19.536738359Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-16T14:56:19.537777141Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.038112ms
grafana | logger=migrator t=2025-06-16T14:56:19.543086884Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.543808022Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=720.379µs
grafana | logger=migrator t=2025-06-16T14:56:19.547126652Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.547763019Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=636.818µs
grafana | logger=migrator t=2025-06-16T14:56:19.55125556Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:19.551807417Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=551.677µs
grafana | logger=migrator t=2025-06-16T14:56:19.556719745Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-16T14:56:19.557174551Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=452.426µs
grafana | logger=migrator t=2025-06-16T14:56:19.560291488Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-16T14:56:19.561644854Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.352507ms
grafana | logger=migrator t=2025-06-16T14:56:19.565178126Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.565220836Z level=info msg="Migration successfully executed" id="Update user table charset" duration=44.56µs
grafana | logger=migrator t=2025-06-16T14:56:19.568884639Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-16T14:56:19.570755621Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.869992ms
grafana | logger=migrator t=2025-06-16T14:56:19.579117201Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-16T14:56:19.579324473Z level=info msg="Migration successfully executed" id="Add missing user data" duration=208.422µs
grafana | logger=migrator t=2025-06-16T14:56:19.582879445Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-16T14:56:19.587242687Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=4.361962ms
grafana | logger=migrator t=2025-06-16T14:56:19.5909162Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-16T14:56:19.591788771Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=872.911µs
grafana | logger=migrator t=2025-06-16T14:56:19.594612244Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-16T14:56:19.595743018Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.130284ms
grafana | logger=migrator t=2025-06-16T14:56:19.600243531Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-16T14:56:19.608526049Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.281788ms
grafana | logger=migrator t=2025-06-16T14:56:19.611850489Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-16T14:56:19.613140464Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.289495ms
grafana | logger=migrator t=2025-06-16T14:56:19.616407123Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-16T14:56:19.616769317Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=361.594µs
grafana | logger=migrator t=2025-06-16T14:56:19.620796244Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-16T14:56:19.621770866Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=973.902µs
grafana | logger=migrator t=2025-06-16T14:56:19.626510162Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-16T14:56:19.628318374Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.806212ms
grafana | logger=migrator t=2025-06-16T14:56:19.633625277Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-16T14:56:19.634167793Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=541.676µs
grafana | logger=migrator t=2025-06-16T14:56:19.637923628Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-16T14:56:19.638557805Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=633.987µs
grafana | logger=migrator t=2025-06-16T14:56:19.643229781Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-16T14:56:19.643808917Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=578.766µs
grafana | logger=migrator t=2025-06-16T14:56:19.647309679Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-16T14:56:19.647880545Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=570.306µs
grafana | logger=migrator t=2025-06-16T14:56:19.651217895Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-16T14:56:19.65245544Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.236945ms
grafana | logger=migrator t=2025-06-16T14:56:19.655906061Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-16T14:56:19.657061004Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.156093ms
grafana | logger=migrator t=2025-06-16T14:56:19.661422196Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-16T14:56:19.662121184Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=698.828µs
grafana | logger=migrator t=2025-06-16T14:56:19.665077069Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-16T14:56:19.665792698Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=715.489µs
grafana | logger=migrator t=2025-06-16T14:56:19.668685912Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-16T14:56:19.669562902Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=876.99µs
grafana | logger=migrator t=2025-06-16T14:56:19.674349699Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.67437482Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.991µs
grafana | logger=migrator t=2025-06-16T14:56:19.676818178Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.677462266Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=643.778µs
grafana | logger=migrator t=2025-06-16T14:56:19.68030809Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.680963548Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=655.018µs
grafana | logger=migrator t=2025-06-16T14:56:19.686599485Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.687445755Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=845.95µs
grafana | logger=migrator t=2025-06-16T14:56:19.691654974Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.692681407Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.024013ms
grafana | logger=migrator t=2025-06-16T14:56:19.696029406Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.701117537Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.088231ms
grafana | logger=migrator t=2025-06-16T14:56:19.705737311Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-16T14:56:19.706527431Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=789.67µs
grafana | logger=migrator t=2025-06-16T14:56:19.709565807Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.710272565Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=706.058µs
grafana | logger=migrator t=2025-06-16T14:56:19.713503213Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.714178441Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=675.018µs
grafana | logger=migrator t=2025-06-16T14:56:19.719738467Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.720443586Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=704.949µs
grafana | logger=migrator t=2025-06-16T14:56:19.723821176Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.725293193Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.470898ms
grafana | logger=migrator t=2025-06-16T14:56:19.72926975Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:19.729879677Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=609.707µs
grafana | logger=migrator t=2025-06-16T14:56:19.733260928Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-16T14:56:19.733752623Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=490.936µs
grafana | logger=migrator t=2025-06-16T14:56:19.739262059Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-16T14:56:19.739585593Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=323.323µs
grafana | logger=migrator t=2025-06-16T14:56:19.743538269Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-16T14:56:19.744585682Z level=info msg="Migration successfully executed" id="create star table" duration=1.047143ms
grafana | logger=migrator t=2025-06-16T14:56:19.747906551Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-16T14:56:19.749163986Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.257075ms
grafana | logger=migrator t=2025-06-16T14:56:19.754360418Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-16T14:56:19.755931476Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.571168ms
grafana | logger=migrator t=2025-06-16T14:56:19.758971212Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-16T14:56:19.760314638Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.343356ms
grafana | logger=migrator t=2025-06-16T14:56:19.763119722Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-16T14:56:19.764602879Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.482107ms
grafana | logger=migrator t=2025-06-16T14:56:19.767568294Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-16T14:56:19.768296063Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=726.989µs
grafana | logger=migrator t=2025-06-16T14:56:19.772743055Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-16T14:56:19.773507124Z level=info msg="Migration successfully executed" id="create org table v1" duration=763.739µs
grafana | logger=migrator t=2025-06-16T14:56:19.776780063Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.777651664Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=871.191µs
grafana | logger=migrator t=2025-06-16T14:56:19.780999573Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-16T14:56:19.781738772Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=739.599µs
grafana | logger=migrator t=2025-06-16T14:56:19.785078682Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.7857795Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=700.468µs
grafana | logger=migrator t=2025-06-16T14:56:19.792742513Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.793926977Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.183864ms
grafana | logger=migrator t=2025-06-16T14:56:19.797699481Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.798789044Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.092453ms
grafana | logger=migrator t=2025-06-16T14:56:19.802129624Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.802155904Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.98µs
grafana | logger=migrator t=2025-06-16T14:56:19.804467872Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.804494022Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=26.6µs
grafana | logger=migrator t=2025-06-16T14:56:19.807647599Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-16T14:56:19.807900962Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=252.793µs
grafana | logger=migrator t=2025-06-16T14:56:19.81277938Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-16T14:56:19.814082566Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.301865ms
grafana | logger=migrator t=2025-06-16T14:56:19.817551676Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-16T14:56:19.819218856Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.66615ms
grafana | logger=migrator t=2025-06-16T14:56:19.823070882Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-16T14:56:19.82462382Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.592809ms
grafana | logger=migrator t=2025-06-16T14:56:19.828142302Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-16T14:56:19.828924311Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=781.449µs
grafana | logger=migrator t=2025-06-16T14:56:19.833511556Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-16T14:56:19.834406426Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=894.57µs
grafana | logger=migrator t=2025-06-16T14:56:19.837639435Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.838497095Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=856.85µs
grafana | logger=migrator t=2025-06-16T14:56:19.84147171Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.84904021Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.56769ms
grafana | logger=migrator t=2025-06-16T14:56:19.853366751Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-16T14:56:19.854244841Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=871.04µs
grafana | logger=migrator t=2025-06-16T14:56:19.857423559Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.858221279Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=797.75µs
grafana | logger=migrator t=2025-06-16T14:56:19.861424947Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.862278717Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=853.63µs
grafana | logger=migrator t=2025-06-16T14:56:19.866841441Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:19.867242666Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=403.835µs
grafana | logger=migrator t=2025-06-16T14:56:19.870271391Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-16T14:56:19.871150502Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=878.501µs
grafana | logger=migrator t=2025-06-16T14:56:19.878066804Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-16T14:56:19.878104674Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=82.651µs
grafana | logger=migrator t=2025-06-16T14:56:19.883210025Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.886132059Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.922434ms
grafana | logger=migrator t=2025-06-16T14:56:19.890509551Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-16T14:56:19.892283272Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.773561ms
grafana | logger=migrator t=2025-06-16T14:56:19.895658572Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.898347194Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.687002ms
grafana | logger=migrator t=2025-06-16T14:56:19.903142311Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.904278854Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.132363ms
grafana | logger=migrator t=2025-06-16T14:56:19.90979882Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.911637072Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.837532ms
grafana | logger=migrator t=2025-06-16T14:56:19.914732878Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.915892782Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.159104ms
grafana | logger=migrator t=2025-06-16T14:56:19.919204831Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-16T14:56:19.920382635Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.177294ms
grafana | logger=migrator t=2025-06-16T14:56:19.924735817Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.924763297Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=27.79µs
grafana | logger=migrator t=2025-06-16T14:56:19.928157947Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-16T14:56:19.928186078Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.491µs
grafana | logger=migrator t=2025-06-16T14:56:19.931650119Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.934671345Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.039747ms
grafana | logger=migrator t=2025-06-16T14:56:19.93932989Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.942371676Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.041096ms
grafana | logger=migrator t=2025-06-16T14:56:19.947194323Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.949109606Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.914753ms
grafana | logger=migrator t=2025-06-16T14:56:19.952589657Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.954466679Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.876702ms
grafana | logger=migrator t=2025-06-16T14:56:19.96130503Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-16T14:56:19.961619354Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=314.094µs
grafana | logger=migrator t=2025-06-16T14:56:19.966707904Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-16T14:56:19.967846338Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.137904ms
grafana | logger=migrator t=2025-06-16T14:56:19.971196097Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-16T14:56:19.971856355Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=660.068µs
grafana | logger=migrator t=2025-06-16T14:56:19.975049953Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-16T14:56:19.975072103Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=22.57µs
grafana | logger=migrator t=2025-06-16T14:56:19.979361094Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-16T14:56:19.980393586Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.035502ms
grafana | logger=migrator t=2025-06-16T14:56:19.98325621Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-16T14:56:19.983815337Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=561.717µs
grafana | logger=migrator t=2025-06-16T14:56:19.986751522Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-16T14:56:19.990748599Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.996747ms
grafana | logger=migrator t=2025-06-16T14:56:19.996566318Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-16T14:56:19.997358948Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=792.029µs
grafana | logger=migrator t=2025-06-16T14:56:20.001693039Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.003378859Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.68861ms
grafana | logger=migrator t=2025-06-16T14:56:20.009747294Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.010759356Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.011862ms
grafana | logger=migrator t=2025-06-16T14:56:20.014029845Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:20.014714783Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=684.198µs
grafana | logger=migrator t=2025-06-16T14:56:20.01948157Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-16T14:56:20.02035232Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=867.92µs
grafana | logger=migrator t=2025-06-16T14:56:20.026451452Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-16T14:56:20.030074755Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.621353ms
grafana | logger=migrator t=2025-06-16T14:56:20.033783929Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-16T14:56:20.03475604Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=972.401µs
grafana | logger=migrator t=2025-06-16T14:56:20.040688841Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-16T14:56:20.040996014Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=309.783µs
grafana | logger=migrator t=2025-06-16T14:56:20.044150672Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-16T14:56:20.044303343Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=152.711µs
grafana | logger=migrator t=2025-06-16T14:56:20.048043688Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-16T14:56:20.048804107Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=759.839µs
grafana | logger=migrator t=2025-06-16T14:56:20.053689745Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-16T14:56:20.057408489Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.720174ms
grafana | logger=migrator t=2025-06-16T14:56:20.065811578Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-16T14:56:20.068460209Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.651051ms
grafana | logger=migrator t=2025-06-16T14:56:20.076708537Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-16T14:56:20.077972992Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.265605ms
grafana | logger=migrator t=2025-06-16T14:56:20.081558255Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-16T14:56:20.085992307Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=4.434953ms
grafana | logger=migrator t=2025-06-16T14:56:20.089509789Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-16T14:56:20.091142268Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=1.632489ms
grafana | logger=migrator t=2025-06-16T14:56:20.096250378Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-16T14:56:20.096662343Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=411.965µs
grafana | logger=migrator t=2025-06-16T14:56:20.100661091Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-16T14:56:20.104862141Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=4.20005ms
grafana | logger=migrator t=2025-06-16T14:56:20.108351082Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-16T14:56:20.109488415Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.138253ms
grafana | logger=migrator t=2025-06-16T14:56:20.115250313Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-16T14:56:20.11575786Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=507.546µs
grafana | logger=migrator t=2025-06-16T14:56:20.118929397Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-16T14:56:20.119849528Z level=info msg="Migration successfully executed" id="create data_source table" duration=919.741µs
grafana | logger=migrator t=2025-06-16T14:56:20.12423919Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-16T14:56:20.125194981Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=958.101µs
grafana | logger=migrator t=2025-06-16T14:56:20.129113148Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-16T14:56:20.130050059Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=936.911µs
grafana | logger=migrator t=2025-06-16T14:56:20.135189299Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.136417144Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.227175ms
grafana | logger=migrator t=2025-06-16T14:56:20.139754024Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
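[Editor's note] The recurring rename/copy/drop sequences in the migrations above (temp_user -> temp_user_tmp_qwerty, dashboard -> dashboard_v1, and the data_source rebuild that follows) are the usual SQLite workaround for schema changes that ALTER TABLE cannot express: rename the old table aside, create the new shape, copy the rows, drop the original. A rough sketch of the pattern with a hypothetical two-column schema -- illustrative only, not Grafana's actual DDL:

  sqlite3 grafana.db <<'SQL'
  ALTER TABLE data_source RENAME TO data_source_v1;                    -- "Rename table data_source to data_source_v1"
  CREATE TABLE data_source (id INTEGER PRIMARY KEY, org_id INTEGER);   -- "create data_source table v2" (schema assumed)
  INSERT INTO data_source (id, org_id)
    SELECT id, org_id FROM data_source_v1;                             -- "copy ... v1 to v2"
  DROP TABLE data_source_v1;                                           -- "Drop old table data_source_v1"
  SQL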
grafana | logger=migrator t=2025-06-16T14:56:20.14117132Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.417106ms
grafana | logger=migrator t=2025-06-16T14:56:20.146866458Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.153617118Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.74998ms
grafana | logger=migrator t=2025-06-16T14:56:20.158122441Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.158832429Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=709.908µs
grafana | logger=migrator t=2025-06-16T14:56:20.161896966Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.162710875Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=801.479µs
grafana | logger=migrator t=2025-06-16T14:56:20.168224921Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.169156932Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=931.671µs
grafana | logger=migrator t=2025-06-16T14:56:20.172146327Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-16T14:56:20.172781554Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=632.237µs
grafana | logger=migrator t=2025-06-16T14:56:20.176898273Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-16T14:56:20.179797518Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.898495ms
grafana | logger=migrator t=2025-06-16T14:56:20.185383584Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-16T14:56:20.188877125Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.494081ms
grafana | logger=migrator t=2025-06-16T14:56:20.192151684Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.192237536Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=85.781µs
grafana | logger=migrator t=2025-06-16T14:56:20.195439553Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-16T14:56:20.195775787Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=335.774µs
grafana | logger=migrator t=2025-06-16T14:56:20.201027089Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-16T14:56:20.203494078Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.466109ms
grafana | logger=migrator t=2025-06-16T14:56:20.206516834Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-16T14:56:20.206901399Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=383.855µs
grafana | logger=migrator t=2025-06-16T14:56:20.209922654Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-16T14:56:20.210228488Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=305.124µs
grafana | logger=migrator t=2025-06-16T14:56:20.212563935Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-16T14:56:20.215987136Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.421491ms
grafana | logger=migrator t=2025-06-16T14:56:20.221474691Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-16T14:56:20.221819175Z level=info msg="Migration successfully executed" id="Update uid value" duration=344.614µs
grafana | logger=migrator t=2025-06-16T14:56:20.225176655Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-16T14:56:20.226170857Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=993.772µs
grafana | logger=migrator t=2025-06-16T14:56:20.23148773Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-16T14:56:20.232662434Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.174414ms
grafana | logger=migrator t=2025-06-16T14:56:20.239747588Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-16T14:56:20.243938337Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.189759ms
grafana | logger=migrator t=2025-06-16T14:56:20.247751092Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-16T14:56:20.251113042Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.36242ms
grafana | logger=migrator t=2025-06-16T14:56:20.25428089Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-16T14:56:20.254379731Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=99.621µs
grafana | logger=migrator t=2025-06-16T14:56:20.261511695Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-16T14:56:20.262359715Z level=info msg="Migration successfully executed" id="create api_key table" duration=848.15µs
grafana | logger=migrator t=2025-06-16T14:56:20.266764987Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-16T14:56:20.268137684Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.372017ms
grafana | logger=migrator t=2025-06-16T14:56:20.271573434Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-16T14:56:20.273093862Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.516338ms
grafana | logger=migrator t=2025-06-16T14:56:20.276545073Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-16T14:56:20.277549585Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.004162ms
grafana | logger=migrator t=2025-06-16T14:56:20.28306965Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.28385865Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=785µs
grafana | logger=migrator t=2025-06-16T14:56:20.287126338Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.288626716Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.500358ms
grafana | logger=migrator t=2025-06-16T14:56:20.294935161Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.29570529Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=770.099µs
grafana | logger=migrator t=2025-06-16T14:56:20.298702465Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.30666365Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.960265ms
grafana | logger=migrator t=2025-06-16T14:56:20.309949038Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.310807329Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=858.261µs
grafana | logger=migrator t=2025-06-16T14:56:20.317571369Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.31850763Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=936.191µs
grafana | logger=migrator t=2025-06-16T14:56:20.322385986Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.324257708Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.870512ms
grafana | logger=migrator t=2025-06-16T14:56:20.328521519Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-16T14:56:20.330077367Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.555348ms
grafana | logger=migrator t=2025-06-16T14:56:20.336060798Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:20.336529783Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=468.595µs
grafana | logger=migrator t=2025-06-16T14:56:20.339731791Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-16T14:56:20.340436499Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=704.558µs
grafana | logger=migrator t=2025-06-16T14:56:20.344823942Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.344994774Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=170.262µs
grafana | logger=migrator t=2025-06-16T14:56:20.350738421Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-16T14:56:20.353377753Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.641332ms
grafana | logger=migrator t=2025-06-16T14:56:20.35648749Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2025-06-16T14:56:20.35909296Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.60549ms
grafana | logger=migrator t=2025-06-16T14:56:20.362305308Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2025-06-16T14:56:20.362553111Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=245.163µs
grafana | logger=migrator t=2025-06-16T14:56:20.368635373Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2025-06-16T14:56:20.372099414Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.465541ms
grafana | logger=migrator t=2025-06-16T14:56:20.376217563Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2025-06-16T14:56:20.378931195Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.713172ms
grafana | logger=migrator t=2025-06-16T14:56:20.382000572Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2025-06-16T14:56:20.382799581Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=798.639µs
grafana | logger=migrator t=2025-06-16T14:56:20.388475938Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2025-06-16T14:56:20.389078525Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=601.587µs
grafana | logger=migrator t=2025-06-16T14:56:20.392534766Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2025-06-16T14:56:20.394050584Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.515818ms
grafana | logger=migrator t=2025-06-16T14:56:20.39791738Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2025-06-16T14:56:20.399412198Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.496258ms
grafana | logger=migrator t=2025-06-16T14:56:20.404965794Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2025-06-16T14:56:20.405789893Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=823.959µs
grafana | logger=migrator t=2025-06-16T14:56:20.409534418Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2025-06-16T14:56:20.410870533Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.335115ms
grafana | logger=migrator t=2025-06-16T14:56:20.414696859Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2025-06-16T14:56:20.41477661Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=80.051µs
grafana | logger=migrator t=2025-06-16T14:56:20.419529636Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.419637557Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=107.641µs
grafana | logger=migrator t=2025-06-16T14:56:20.423321461Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2025-06-16T14:56:20.426309496Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.987455ms
grafana | logger=migrator t=2025-06-16T14:56:20.429885478Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2025-06-16T14:56:20.432856134Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.970456ms
grafana | logger=migrator t=2025-06-16T14:56:20.437290886Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2025-06-16T14:56:20.437330387Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=39.531µs
grafana | logger=migrator t=2025-06-16T14:56:20.440686426Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.441401335Z level=info msg="Migration successfully executed" id="create quota table v1" duration=714.529µs
grafana | logger=migrator t=2025-06-16T14:56:20.444987177Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.446411194Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.397287ms
grafana | logger=migrator t=2025-06-16T14:56:20.450245959Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.450456122Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=210.263µs
grafana | logger=migrator t=2025-06-16T14:56:20.456490654Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-16T14:56:20.457335044Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=844.52µs
grafana | logger=migrator t=2025-06-16T14:56:20.461331341Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.462308872Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=976.971µs
grafana | logger=migrator t=2025-06-16T14:56:20.467021938Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-16T14:56:20.471069886Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.050518ms
grafana | logger=migrator t=2025-06-16T14:56:20.474111662Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.474147662Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=35.62µs
grafana | logger=migrator t=2025-06-16T14:56:20.476844224Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-16T14:56:20.477198159Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=353.645µs
grafana | logger=migrator t=2025-06-16T14:56:20.480412377Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
grafana | logger=migrator t=2025-06-16T14:56:20.488192739Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=7.780042ms
grafana | logger=migrator t=2025-06-16T14:56:20.495218962Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-16T14:56:20.49669899Z level=info msg="Migration successfully executed" id="create session table" duration=1.478978ms
grafana | logger=migrator t=2025-06-16T14:56:20.500238821Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-16T14:56:20.500562425Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=322.914µs
grafana | logger=migrator t=2025-06-16T14:56:20.504091767Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-16T14:56:20.504181768Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=89.911µs
grafana | logger=migrator t=2025-06-16T14:56:20.509751134Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.510665245Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=913.381µs
grafana | logger=migrator t=2025-06-16T14:56:20.514244577Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.515473632Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.228845ms
grafana | logger=migrator t=2025-06-16T14:56:20.51866991Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.51870971Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=41.01µs
grafana | logger=migrator t=2025-06-16T14:56:20.522017329Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.52205791Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=42.071µs
grafana | logger=migrator t=2025-06-16T14:56:20.52458856Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-16T14:56:20.529610939Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.021139ms
grafana | logger=migrator t=2025-06-16T14:56:20.53470341Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-16T14:56:20.537843177Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.136767ms
grafana | logger=migrator t=2025-06-16T14:56:20.54069949Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.540780331Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=80.991µs
grafana | logger=migrator t=2025-06-16T14:56:20.543677716Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-16T14:56:20.543750057Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=72.451µs
grafana | logger=migrator t=2025-06-16T14:56:20.549788388Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-16T14:56:20.551053093Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.263905ms
grafana | logger=migrator t=2025-06-16T14:56:20.554705746Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.554739807Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=34.961µs
grafana | logger=migrator t=2025-06-16T14:56:20.559650655Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-16T14:56:20.563002194Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.350669ms
grafana | logger=migrator t=2025-06-16T14:56:20.568359438Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-16T14:56:20.56852042Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=160.972µs
grafana | logger=migrator t=2025-06-16T14:56:20.571223052Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-16T14:56:20.57442436Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.200838ms
grafana | logger=migrator t=2025-06-16T14:56:20.577189082Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-16T14:56:20.58040339Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.213838ms
grafana | logger=migrator t=2025-06-16T14:56:20.583298435Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-16T14:56:20.583313155Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=15.13µs
grafana | logger=migrator t=2025-06-16T14:56:20.588056421Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-16T14:56:20.58883736Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=782.939µs
grafana | logger=migrator t=2025-06-16T14:56:20.591842686Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-16T14:56:20.593218782Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.375456ms
grafana | logger=migrator t=2025-06-16T14:56:20.596346479Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.597804437Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.457568ms
grafana | logger=migrator t=2025-06-16T14:56:20.602863716Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-16T14:56:20.603683026Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=819.29µs
grafana | logger=migrator t=2025-06-16T14:56:20.606422989Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-16T14:56:20.607188098Z level=info msg="Migration successfully executed" id="add index alert state" duration=762.079µs
grafana | logger=migrator t=2025-06-16T14:56:20.611953454Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-16T14:56:20.612744423Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=789.959µs
grafana | logger=migrator t=2025-06-16T14:56:20.619524944Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.620659077Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.133863ms
grafana | logger=migrator t=2025-06-16T14:56:20.623962576Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-16T14:56:20.625360533Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.396997ms
grafana | logger=migrator t=2025-06-16T14:56:20.628578511Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.62933231Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=753.669µs
grafana | logger=migrator t=2025-06-16T14:56:20.634155677Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-16T14:56:20.643823832Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.667565ms
grafana | logger=migrator t=2025-06-16T14:56:20.64703795Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-16T14:56:20.648049671Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.011671ms
grafana | logger=migrator t=2025-06-16T14:56:20.651103198Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-16T14:56:20.652467154Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.363006ms
grafana | logger=migrator t=2025-06-16T14:56:20.659324455Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-16T14:56:20.659606348Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=281.603µs
grafana | logger=migrator t=2025-06-16T14:56:20.662130808Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-16T14:56:20.662683465Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=552.087µs
grafana | logger=migrator t=2025-06-16T14:56:20.667847056Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.669456235Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.611499ms
grafana | logger=migrator t=2025-06-16T14:56:20.672987197Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-16T14:56:20.678895776Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.908919ms
grafana | logger=migrator t=2025-06-16T14:56:20.681884532Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-16T14:56:20.685530965Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.646433ms
grafana | logger=migrator t=2025-06-16T14:56:20.689998938Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-16T14:56:20.693752682Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.753744ms
grafana | logger=migrator t=2025-06-16T14:56:20.698490439Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-16T14:56:20.702143282Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.652843ms
grafana | logger=migrator t=2025-06-16T14:56:20.705890046Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-16T14:56:20.706869658Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=979.282µs
grafana | logger=migrator t=2025-06-16T14:56:20.711537233Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.711566963Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.83µs
grafana | logger=migrator t=2025-06-16T14:56:20.716261089Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-16T14:56:20.716299889Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=39.72µs
grafana | logger=migrator t=2025-06-16T14:56:20.721328499Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.722743276Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.414037ms
grafana | logger=migrator t=2025-06-16T14:56:20.727399131Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-16T14:56:20.728366322Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=966.191µs
grafana | logger=migrator t=2025-06-16T14:56:20.731954575Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-16T14:56:20.733136289Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.181064ms
grafana | logger=migrator t=2025-06-16T14:56:20.737368259Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-16T14:56:20.738676744Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.308485ms
grafana | logger=migrator t=2025-06-16T14:56:20.744752486Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-16T14:56:20.746546288Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.795801ms
grafana | logger=migrator t=2025-06-16T14:56:20.750522535Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-16T14:56:20.75439847Z level=info msg="Migration
successfully executed" id="Add for to alert table" duration=3.839885ms grafana | logger=migrator t=2025-06-16T14:56:20.757830871Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-16T14:56:20.76113278Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.302439ms grafana | logger=migrator t=2025-06-16T14:56:20.764219837Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-16T14:56:20.764401949Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=179.482µs grafana | logger=migrator t=2025-06-16T14:56:20.767384004Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:20.768177173Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=792.049µs grafana | logger=migrator t=2025-06-16T14:56:20.77547079Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-16T14:56:20.776626614Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.154934ms grafana | logger=migrator t=2025-06-16T14:56:20.781522222Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-16T14:56:20.787965688Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.443506ms grafana | logger=migrator t=2025-06-16T14:56:20.791175846Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-16T14:56:20.791196026Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=18.5µs grafana | logger=migrator t=2025-06-16T14:56:20.795882182Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-16T14:56:20.796799303Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=916.971µs grafana | logger=migrator t=2025-06-16T14:56:20.800390515Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-16T14:56:20.801276856Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=885.861µs grafana | logger=migrator t=2025-06-16T14:56:20.804493694Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-16T14:56:20.804610445Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=116.031µs grafana | logger=migrator t=2025-06-16T14:56:20.8109027Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-16T14:56:20.811897951Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=994.761µs grafana | logger=migrator t=2025-06-16T14:56:20.815340662Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-16T14:56:20.816264523Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=923.661µs grafana | logger=migrator t=2025-06-16T14:56:20.819628533Z 
level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-16T14:56:20.820588814Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=959.961µs grafana | logger=migrator t=2025-06-16T14:56:20.825462142Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-16T14:56:20.826441853Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=979.211µs grafana | logger=migrator t=2025-06-16T14:56:20.831058928Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-16T14:56:20.833200223Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=2.140705ms grafana | logger=migrator t=2025-06-16T14:56:20.837002049Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-16T14:56:20.838676088Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.672689ms grafana | logger=migrator t=2025-06-16T14:56:20.843674907Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-16T14:56:20.843699528Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.811µs grafana | logger=migrator t=2025-06-16T14:56:20.847036697Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.855024812Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.989285ms grafana | logger=migrator t=2025-06-16T14:56:20.85826087Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-16T14:56:20.859427504Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.192054ms grafana | logger=migrator t=2025-06-16T14:56:20.863098668Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.867381748Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.282211ms grafana | logger=migrator t=2025-06-16T14:56:20.870986481Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-16T14:56:20.871697939Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=710.888µs grafana | logger=migrator t=2025-06-16T14:56:20.876172872Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-16T14:56:20.877228395Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.054923ms grafana | logger=migrator t=2025-06-16T14:56:20.881777959Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-16T14:56:20.882682959Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=905.16µs grafana | logger=migrator t=2025-06-16T14:56:20.887390995Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-16T14:56:20.899770242Z level=info msg="Migration successfully executed" id="Rename table annotation_tag 
to annotation_tag_v2 - v2" duration=12.379997ms grafana | logger=migrator t=2025-06-16T14:56:20.903395045Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-16T14:56:20.903941961Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=543.247µs grafana | logger=migrator t=2025-06-16T14:56:20.907550644Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-16T14:56:20.908519935Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=966.381µs grafana | logger=migrator t=2025-06-16T14:56:20.914694988Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-16T14:56:20.915266165Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=570.227µs grafana | logger=migrator t=2025-06-16T14:56:20.918945219Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-16T14:56:20.91991232Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=966.052µs grafana | logger=migrator t=2025-06-16T14:56:20.923654874Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-16T14:56:20.923925218Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=269.654µs grafana | logger=migrator t=2025-06-16T14:56:20.927224457Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.931525377Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.29993ms grafana | logger=migrator t=2025-06-16T14:56:20.936068772Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.940281731Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.21304ms grafana | logger=migrator t=2025-06-16T14:56:20.943863754Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.944839945Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=976.081µs grafana | logger=migrator t=2025-06-16T14:56:20.948299666Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.949295958Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=995.732µs grafana | logger=migrator t=2025-06-16T14:56:20.953777611Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-16T14:56:20.954091525Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=313.264µs grafana | logger=migrator t=2025-06-16T14:56:20.957491285Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-16T14:56:20.961909777Z level=info msg="Migration successfully executed" id="Add epoch_end 
column" duration=4.417442ms grafana | logger=migrator t=2025-06-16T14:56:20.966317969Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-16T14:56:20.967342372Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.024463ms grafana | logger=migrator t=2025-06-16T14:56:20.971819545Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-16T14:56:20.972186689Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=365.864µs grafana | logger=migrator t=2025-06-16T14:56:20.976195087Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-16T14:56:20.977015006Z level=info msg="Migration successfully executed" id="Move region to single row" duration=819.54µs grafana | logger=migrator t=2025-06-16T14:56:20.980970573Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.981832673Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=861.82µs grafana | logger=migrator t=2025-06-16T14:56:20.986570169Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.987541281Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=970.522µs grafana | logger=migrator t=2025-06-16T14:56:20.994864377Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-16T14:56:20.995810528Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=945.971µs grafana | logger=migrator t=2025-06-16T14:56:20.999528423Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-16T14:56:21.001144132Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.614939ms grafana | logger=migrator t=2025-06-16T14:56:21.005959349Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-16T14:56:21.007292304Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.333955ms grafana | logger=migrator t=2025-06-16T14:56:21.010795446Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-16T14:56:21.011782497Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=986.801µs grafana | logger=migrator t=2025-06-16T14:56:21.015269199Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-16T14:56:21.01537055Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=101.831µs grafana | logger=migrator t=2025-06-16T14:56:21.020935636Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-16T14:56:21.021037657Z level=info msg="Migration successfully executed" 
id="Increase prev_state column to length 40 not null" duration=102.661µs grafana | logger=migrator t=2025-06-16T14:56:21.02547537Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-16T14:56:21.025624831Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=150.191µs grafana | logger=migrator t=2025-06-16T14:56:21.029521817Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-16T14:56:21.030933904Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.411237ms grafana | logger=migrator t=2025-06-16T14:56:21.03482191Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-16T14:56:21.035716301Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=893.541µs grafana | logger=migrator t=2025-06-16T14:56:21.040146663Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-16T14:56:21.041163825Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.016532ms grafana | logger=migrator t=2025-06-16T14:56:21.046544209Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-16T14:56:21.047547481Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.002751ms grafana | logger=migrator t=2025-06-16T14:56:21.051315965Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-16T14:56:21.05174312Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=427.755µs grafana | logger=migrator t=2025-06-16T14:56:21.056564597Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-16T14:56:21.057258986Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=693.419µs grafana | logger=migrator t=2025-06-16T14:56:21.06099024Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-16T14:56:21.06104448Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=57.06µs grafana | logger=migrator t=2025-06-16T14:56:21.064640993Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-16T14:56:21.070835726Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=6.193023ms grafana | logger=migrator t=2025-06-16T14:56:21.075115867Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-16T14:56:21.075911766Z level=info msg="Migration successfully executed" id="create team table" duration=795.879µs grafana | logger=migrator t=2025-06-16T14:56:21.08043907Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-16T14:56:21.08134298Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=903.92µs grafana | logger=migrator 
t=2025-06-16T14:56:21.084860852Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-16T14:56:21.086424861Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.563358ms grafana | logger=migrator t=2025-06-16T14:56:21.091283458Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-16T14:56:21.09736311Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.079742ms grafana | logger=migrator t=2025-06-16T14:56:21.102515661Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-16T14:56:21.102807284Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=290.793µs grafana | logger=migrator t=2025-06-16T14:56:21.106589269Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:21.107758333Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.168714ms grafana | logger=migrator t=2025-06-16T14:56:21.111486227Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-16T14:56:21.117737571Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.250564ms grafana | logger=migrator t=2025-06-16T14:56:21.122281354Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-16T14:56:21.125867207Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=3.585733ms grafana | logger=migrator t=2025-06-16T14:56:21.130263699Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-16T14:56:21.131140389Z level=info msg="Migration successfully executed" id="create team member table" duration=875.69µs grafana | logger=migrator t=2025-06-16T14:56:21.13456012Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-16T14:56:21.135637912Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.076512ms grafana | logger=migrator t=2025-06-16T14:56:21.140248487Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-16T14:56:21.141259119Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.010932ms grafana | logger=migrator t=2025-06-16T14:56:21.14471074Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-16T14:56:21.145883444Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.171554ms grafana | logger=migrator t=2025-06-16T14:56:21.151719193Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-16T14:56:21.157171897Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.452174ms grafana | logger=migrator t=2025-06-16T14:56:21.162174536Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-16T14:56:21.167027974Z level=info msg="Migration successfully executed" id="Add column external to team_member table" 
duration=4.852758ms grafana | logger=migrator t=2025-06-16T14:56:21.170451704Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-16T14:56:21.175236621Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.784417ms grafana | logger=migrator t=2025-06-16T14:56:21.178710942Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-16T14:56:21.179723844Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.011962ms grafana | logger=migrator t=2025-06-16T14:56:21.185195999Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-16T14:56:21.186069059Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=873.08µs grafana | logger=migrator t=2025-06-16T14:56:21.189624321Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-16T14:56:21.190633143Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.008142ms grafana | logger=migrator t=2025-06-16T14:56:21.194021113Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-16T14:56:21.195024865Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.002182ms grafana | logger=migrator t=2025-06-16T14:56:21.199079933Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-16T14:56:21.200102145Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.021192ms grafana | logger=migrator t=2025-06-16T14:56:21.20389385Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-16T14:56:21.204928692Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.034572ms grafana | logger=migrator t=2025-06-16T14:56:21.208740957Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-16T14:56:21.210279215Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.509258ms grafana | logger=migrator t=2025-06-16T14:56:21.21489404Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-16T14:56:21.216167785Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.274175ms grafana | logger=migrator t=2025-06-16T14:56:21.219738237Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-16T14:56:21.220878141Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.139454ms grafana | logger=migrator t=2025-06-16T14:56:21.225980951Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-16T14:56:21.226462337Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=481.186µs grafana | logger=migrator t=2025-06-16T14:56:21.229981568Z level=info msg="Executing migration" 
id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-16T14:56:21.230216441Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=234.783µs grafana | logger=migrator t=2025-06-16T14:56:21.234461521Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-16T14:56:21.237104542Z level=info msg="Migration successfully executed" id="create tag table" duration=2.643851ms grafana | logger=migrator t=2025-06-16T14:56:21.243498778Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-16T14:56:21.245071427Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.571709ms grafana | logger=migrator t=2025-06-16T14:56:21.248967823Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-16T14:56:21.250258418Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.289435ms grafana | logger=migrator t=2025-06-16T14:56:21.25382766Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-16T14:56:21.254841612Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.013442ms grafana | logger=migrator t=2025-06-16T14:56:21.259481757Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-16T14:56:21.260749052Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.265815ms grafana | logger=migrator t=2025-06-16T14:56:21.264591648Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T14:56:21.281306635Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=16.713057ms grafana | logger=migrator t=2025-06-16T14:56:21.284665315Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-16T14:56:21.285233652Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=568.077µs grafana | logger=migrator t=2025-06-16T14:56:21.289918807Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-16T14:56:21.290616775Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=697.718µs grafana | logger=migrator t=2025-06-16T14:56:21.2943504Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-16T14:56:21.294940357Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=585.477µs grafana | logger=migrator t=2025-06-16T14:56:21.29946799Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-16T14:56:21.300108008Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=639.078µs grafana | logger=migrator t=2025-06-16T14:56:21.305437611Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-16T14:56:21.3062475Z level=info msg="Migration successfully executed" id="create user auth table" duration=809.229µs grafana | logger=migrator 
t=2025-06-16T14:56:21.311036057Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-16T14:56:21.312000618Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=964.231µs grafana | logger=migrator t=2025-06-16T14:56:21.315675032Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-16T14:56:21.315800023Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=86.411µs grafana | logger=migrator t=2025-06-16T14:56:21.321651713Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.32908406Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.432688ms grafana | logger=migrator t=2025-06-16T14:56:21.332514361Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.337689662Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.174531ms grafana | logger=migrator t=2025-06-16T14:56:21.341294355Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.346614308Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.318973ms grafana | logger=migrator t=2025-06-16T14:56:21.351097781Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.356275192Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.176631ms grafana | logger=migrator t=2025-06-16T14:56:21.360803106Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.361805657Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.001732ms grafana | logger=migrator t=2025-06-16T14:56:21.365320099Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.370567621Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.246922ms grafana | logger=migrator t=2025-06-16T14:56:21.373982802Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-16T14:56:21.379301004Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.317422ms grafana | logger=migrator t=2025-06-16T14:56:21.385037132Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-16T14:56:21.385809641Z level=info msg="Migration successfully executed" id="create server_lock table" duration=772.019µs grafana | logger=migrator t=2025-06-16T14:56:21.389479925Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-16T14:56:21.391517969Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.035544ms grafana | logger=migrator t=2025-06-16T14:56:21.395109611Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-16T14:56:21.396766691Z level=info msg="Migration 
successfully executed" id="create user auth token table" duration=1.65613ms grafana | logger=migrator t=2025-06-16T14:56:21.400310713Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-16T14:56:21.401385415Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.077542ms grafana | logger=migrator t=2025-06-16T14:56:21.40599118Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-16T14:56:21.407297536Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.305145ms grafana | logger=migrator t=2025-06-16T14:56:21.410918398Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-16T14:56:21.413136545Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=2.217247ms grafana | logger=migrator t=2025-06-16T14:56:21.416429853Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-16T14:56:21.425262048Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.831285ms grafana | logger=migrator t=2025-06-16T14:56:21.43050969Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-16T14:56:21.431162728Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=652.298µs grafana | logger=migrator t=2025-06-16T14:56:21.436239938Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-16T14:56:21.445474967Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.237309ms grafana | logger=migrator t=2025-06-16T14:56:21.448538733Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-16T14:56:21.449357043Z level=info msg="Migration successfully executed" id="create cache_data table" duration=817.9µs grafana | logger=migrator t=2025-06-16T14:56:21.454610775Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-16T14:56:21.45584782Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.235995ms grafana | logger=migrator t=2025-06-16T14:56:21.459201929Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-16T14:56:21.460443064Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.244495ms grafana | logger=migrator t=2025-06-16T14:56:21.463695643Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-16T14:56:21.464644364Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=948.491µs grafana | logger=migrator t=2025-06-16T14:56:21.469672083Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-16T14:56:21.469689353Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=17.64µs grafana | logger=migrator t=2025-06-16T14:56:21.474345538Z 
level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-16T14:56:21.47448381Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=135.702µs grafana | logger=migrator t=2025-06-16T14:56:21.477682158Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-16T14:56:21.479115475Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.431767ms grafana | logger=migrator t=2025-06-16T14:56:21.482303893Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T14:56:21.483220253Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=916.191µs grafana | logger=migrator t=2025-06-16T14:56:21.490604901Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T14:56:21.491915326Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.313445ms grafana | logger=migrator t=2025-06-16T14:56:21.49816352Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T14:56:21.49818815Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=25.57µs grafana | logger=migrator t=2025-06-16T14:56:21.50155761Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T14:56:21.502808275Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.250595ms grafana | logger=migrator t=2025-06-16T14:56:21.506274916Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T14:56:21.507382499Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.110363ms grafana | logger=migrator t=2025-06-16T14:56:21.512095655Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T14:56:21.512851914Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=755.719µs grafana | logger=migrator t=2025-06-16T14:56:21.517373567Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T14:56:21.518122966Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=749.179µs grafana | logger=migrator t=2025-06-16T14:56:21.521717349Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-16T14:56:21.531371683Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.656305ms grafana | logger=migrator t=2025-06-16T14:56:21.535626683Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-16T14:56:21.536315181Z level=info msg="Migration successfully executed" 
id="drop alert_definition table" duration=688.028µs grafana | logger=migrator t=2025-06-16T14:56:21.541367451Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-16T14:56:21.541777216Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=410.205µs grafana | logger=migrator t=2025-06-16T14:56:21.546489352Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-16T14:56:21.547419123Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=929.331µs grafana | logger=migrator t=2025-06-16T14:56:21.551504721Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-16T14:56:21.552518753Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.013852ms grafana | logger=migrator t=2025-06-16T14:56:21.556036734Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-16T14:56:21.557882846Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.845122ms grafana | logger=migrator t=2025-06-16T14:56:21.562747854Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T14:56:21.562912006Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=132.572µs grafana | logger=migrator t=2025-06-16T14:56:21.567140216Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-16T14:56:21.568182788Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.042082ms grafana | logger=migrator t=2025-06-16T14:56:21.571464287Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-16T14:56:21.57253941Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.074483ms grafana | logger=migrator t=2025-06-16T14:56:21.576916571Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-16T14:56:21.578276058Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.358446ms grafana | logger=migrator t=2025-06-16T14:56:21.581746189Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-16T14:56:21.582836522Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.089693ms grafana | logger=migrator t=2025-06-16T14:56:21.586363763Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.592549536Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" 
duration=6.182803ms grafana | logger=migrator t=2025-06-16T14:56:21.59872671Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.599740842Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.013712ms grafana | logger=migrator t=2025-06-16T14:56:21.603341744Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.604589049Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.276865ms grafana | logger=migrator t=2025-06-16T14:56:21.608135011Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.633819784Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.684443ms grafana | logger=migrator t=2025-06-16T14:56:21.63938069Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.660384449Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.003269ms grafana | logger=migrator t=2025-06-16T14:56:21.66389732Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.66474052Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=842.62µs grafana | logger=migrator t=2025-06-16T14:56:21.668465434Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.669213383Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=747.559µs grafana | logger=migrator t=2025-06-16T14:56:21.674783849Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-16T14:56:21.680524487Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.766758ms grafana | logger=migrator t=2025-06-16T14:56:21.68414516Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-16T14:56:21.690145551Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.999401ms grafana | logger=migrator t=2025-06-16T14:56:21.693790824Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-16T14:56:21.694829776Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.038262ms grafana | logger=migrator t=2025-06-16T14:56:21.70112628Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-16T14:56:21.702204363Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.077313ms grafana | logger=migrator t=2025-06-16T14:56:21.7061585Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | 
logger=migrator t=2025-06-16T14:56:21.707162242Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.002982ms grafana | logger=migrator t=2025-06-16T14:56:21.711653485Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-16T14:56:21.712696547Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.042592ms grafana | logger=migrator t=2025-06-16T14:56:21.716610834Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T14:56:21.716684775Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=74.521µs grafana | logger=migrator t=2025-06-16T14:56:21.720170896Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.726653122Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.481436ms grafana | logger=migrator t=2025-06-16T14:56:21.730042523Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.734397274Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.347501ms grafana | logger=migrator t=2025-06-16T14:56:21.739003168Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.745293563Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.287035ms grafana | logger=migrator t=2025-06-16T14:56:21.749151999Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-16T14:56:21.75011167Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=959.361µs grafana | logger=migrator t=2025-06-16T14:56:21.753672132Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-16T14:56:21.754821876Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.149494ms grafana | logger=migrator t=2025-06-16T14:56:21.760082388Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.767206542Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.124414ms grafana | logger=migrator t=2025-06-16T14:56:21.7712552Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.777261691Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.007751ms grafana | logger=migrator t=2025-06-16T14:56:21.782206719Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-16T14:56:21.783287182Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.079933ms grafana | logger=migrator 
t=2025-06-16T14:56:21.787145518Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:21.794270692Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.123774ms grafana | logger=migrator t=2025-06-16T14:56:21.800083081Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-16T14:56:21.806255014Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.171463ms grafana | logger=migrator t=2025-06-16T14:56:21.810726507Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-16T14:56:21.810778647Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=57.93µs grafana | logger=migrator t=2025-06-16T14:56:21.814310049Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-16T14:56:21.815599734Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.289035ms grafana | logger=migrator t=2025-06-16T14:56:21.820900357Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-16T14:56:21.822619417Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.71836ms grafana | logger=migrator t=2025-06-16T14:56:21.826677315Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-16T14:56:21.827737818Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.060113ms grafana | logger=migrator t=2025-06-16T14:56:21.832783898Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T14:56:21.832837658Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=54.23µs grafana | logger=migrator t=2025-06-16T14:56:21.837319241Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-16T14:56:21.846051195Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.732414ms grafana | logger=migrator t=2025-06-16T14:56:21.849806699Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-16T14:56:21.856047933Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.241134ms grafana | logger=migrator t=2025-06-16T14:56:21.860644177Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-16T14:56:21.866301084Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.656007ms grafana | logger=migrator t=2025-06-16T14:56:21.8702096Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator 
t=2025-06-16T14:56:21.877512897Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.302616ms grafana | logger=migrator t=2025-06-16T14:56:21.88116695Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-16T14:56:21.887487725Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.320145ms grafana | logger=migrator t=2025-06-16T14:56:21.891620333Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-16T14:56:21.891672974Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=53.101µs grafana | logger=migrator t=2025-06-16T14:56:21.896729474Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-16T14:56:21.897801486Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.071622ms grafana | logger=migrator t=2025-06-16T14:56:21.903031208Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-16T14:56:21.912177266Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.147088ms grafana | logger=migrator t=2025-06-16T14:56:21.915471016Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-16T14:56:21.915514056Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=43.86µs grafana | logger=migrator t=2025-06-16T14:56:21.919501823Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-16T14:56:21.926089221Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.587288ms grafana | logger=migrator t=2025-06-16T14:56:21.929480531Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-16T14:56:21.930412922Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=931.841µs grafana | logger=migrator t=2025-06-16T14:56:21.933547409Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-16T14:56:21.938325466Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.777977ms grafana | logger=migrator t=2025-06-16T14:56:21.943376006Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-16T14:56:21.944385437Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.008871ms grafana | logger=migrator t=2025-06-16T14:56:21.947635696Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-16T14:56:21.948775079Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.139083ms grafana | logger=migrator t=2025-06-16T14:56:21.952164349Z 
level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-16T14:56:21.958907139Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.74226ms grafana | logger=migrator t=2025-06-16T14:56:21.963384302Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-16T14:56:21.964531516Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.154114ms grafana | logger=migrator t=2025-06-16T14:56:21.968224879Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-16T14:56:21.969385923Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.160444ms grafana | logger=migrator t=2025-06-16T14:56:21.972738273Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-16T14:56:21.973640664Z level=info msg="Migration successfully executed" id="create alert_image table" duration=902.101µs grafana | logger=migrator t=2025-06-16T14:56:21.977817913Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-16T14:56:21.978936686Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.117893ms grafana | logger=migrator t=2025-06-16T14:56:21.982370107Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-16T14:56:21.982425217Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=55.16µs grafana | logger=migrator t=2025-06-16T14:56:21.986452505Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-16T14:56:21.987526748Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.073653ms grafana | logger=migrator t=2025-06-16T14:56:21.992259284Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-16T14:56:21.994537871Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.278107ms grafana | logger=migrator t=2025-06-16T14:56:21.998505928Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-16T14:56:21.999334867Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-16T14:56:22.002994241Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-16T14:56:22.00379967Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=805.119µs grafana | logger=migrator t=2025-06-16T14:56:22.008136371Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-16T14:56:22.009478057Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" 
duration=1.339966ms grafana | logger=migrator t=2025-06-16T14:56:22.014497037Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-16T14:56:22.023289171Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.792044ms grafana | logger=migrator t=2025-06-16T14:56:22.026828442Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-16T14:56:22.028223859Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.393047ms grafana | logger=migrator t=2025-06-16T14:56:22.032716862Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-16T14:56:22.033935996Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.224154ms grafana | logger=migrator t=2025-06-16T14:56:22.038656772Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-16T14:56:22.039600943Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=943.701µs grafana | logger=migrator t=2025-06-16T14:56:22.043134115Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-16T14:56:22.044272478Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.136823ms grafana | logger=migrator t=2025-06-16T14:56:22.048775901Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:22.049903875Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.127714ms grafana | logger=migrator t=2025-06-16T14:56:22.053302335Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-16T14:56:22.053436237Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=133.642µs grafana | logger=migrator t=2025-06-16T14:56:22.059267656Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-16T14:56:22.059382707Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=86.351µs grafana | logger=migrator t=2025-06-16T14:56:22.064037142Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-16T14:56:22.071385679Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=7.348837ms grafana | logger=migrator t=2025-06-16T14:56:22.075921532Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-16T14:56:22.076392578Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=467.186µs grafana | logger=migrator t=2025-06-16T14:56:22.079744497Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-16T14:56:22.081246195Z level=info msg="Migration successfully executed" id="add index library_element 
org_id-folder_uid-name-kind" duration=1.500848ms grafana | logger=migrator t=2025-06-16T14:56:22.084802157Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-16T14:56:22.085144151Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=341.394µs grafana | logger=migrator t=2025-06-16T14:56:22.090622306Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-16T14:56:22.091676739Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.062413ms grafana | logger=migrator t=2025-06-16T14:56:22.095274881Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-16T14:56:22.096174002Z level=info msg="Migration successfully executed" id="create secrets table" duration=898.271µs grafana | logger=migrator t=2025-06-16T14:56:22.099590102Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-16T14:56:22.133117478Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.526956ms grafana | logger=migrator t=2025-06-16T14:56:22.137720402Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-16T14:56:22.143039345Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.318463ms grafana | logger=migrator t=2025-06-16T14:56:22.146730369Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-16T14:56:22.146970022Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=238.843µs grafana | logger=migrator t=2025-06-16T14:56:22.150547934Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-16T14:56:22.18490453Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.355886ms grafana | logger=migrator t=2025-06-16T14:56:22.188511212Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-16T14:56:22.215953377Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.440875ms grafana | logger=migrator t=2025-06-16T14:56:22.220720893Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-16T14:56:22.221613644Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=893.17µs grafana | logger=migrator t=2025-06-16T14:56:22.225230386Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-16T14:56:22.226088136Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=857.93µs grafana | logger=migrator t=2025-06-16T14:56:22.229512537Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-16T14:56:22.229835431Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=322.444µs grafana | logger=migrator t=2025-06-16T14:56:22.23399544Z level=info msg="Executing migration" id="create permission table" grafana 
| logger=migrator t=2025-06-16T14:56:22.2348769Z level=info msg="Migration successfully executed" id="create permission table" duration=881.17µs grafana | logger=migrator t=2025-06-16T14:56:22.238702025Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-16T14:56:22.240478527Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.779252ms grafana | logger=migrator t=2025-06-16T14:56:22.244563215Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-16T14:56:22.245773179Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.209944ms grafana | logger=migrator t=2025-06-16T14:56:22.250314673Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-16T14:56:22.251518337Z level=info msg="Migration successfully executed" id="create role table" duration=1.203234ms grafana | logger=migrator t=2025-06-16T14:56:22.25518548Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-16T14:56:22.262735179Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.548969ms grafana | logger=migrator t=2025-06-16T14:56:22.266231271Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-16T14:56:22.271921428Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.688587ms grafana | logger=migrator t=2025-06-16T14:56:22.277367282Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-16T14:56:22.278938521Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.570909ms grafana | logger=migrator t=2025-06-16T14:56:22.282654995Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-16T14:56:22.283818889Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.163384ms grafana | logger=migrator t=2025-06-16T14:56:22.287235159Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:22.288404513Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.168734ms grafana | logger=migrator t=2025-06-16T14:56:22.292876055Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-16T14:56:22.293932258Z level=info msg="Migration successfully executed" id="create team role table" duration=1.055233ms grafana | logger=migrator t=2025-06-16T14:56:22.30002839Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-16T14:56:22.302023303Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.993673ms grafana | logger=migrator t=2025-06-16T14:56:22.306254093Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-16T14:56:22.307905683Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.65238ms grafana | logger=migrator t=2025-06-16T14:56:22.312501688Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-16T14:56:22.313628761Z level=info 
msg="Migration successfully executed" id="add index team_role.team_id" duration=1.126953ms grafana | logger=migrator t=2025-06-16T14:56:22.317057961Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-16T14:56:22.317999182Z level=info msg="Migration successfully executed" id="create user role table" duration=940.141µs grafana | logger=migrator t=2025-06-16T14:56:22.321523634Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-16T14:56:22.322715928Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.191214ms grafana | logger=migrator t=2025-06-16T14:56:22.328392685Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-16T14:56:22.329921543Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.528088ms grafana | logger=migrator t=2025-06-16T14:56:22.333318333Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-16T14:56:22.335824993Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.50599ms grafana | logger=migrator t=2025-06-16T14:56:22.33985869Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-16T14:56:22.341364898Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.505598ms grafana | logger=migrator t=2025-06-16T14:56:22.345992633Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-16T14:56:22.346837643Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=844.07µs grafana | logger=migrator t=2025-06-16T14:56:22.351031203Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-16T14:56:22.351883063Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=851.71µs grafana | logger=migrator t=2025-06-16T14:56:22.356044472Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-16T14:56:22.366566796Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.522564ms grafana | logger=migrator t=2025-06-16T14:56:22.372192512Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-16T14:56:22.374171636Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.978154ms grafana | logger=migrator t=2025-06-16T14:56:22.378119382Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-16T14:56:22.379355297Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.247685ms grafana | logger=migrator t=2025-06-16T14:56:22.382752297Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:22.383954611Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.202324ms grafana | logger=migrator t=2025-06-16T14:56:22.388189062Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator 
t=2025-06-16T14:56:22.389486017Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.296005ms grafana | logger=migrator t=2025-06-16T14:56:22.393498874Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-16T14:56:22.395278935Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.778111ms grafana | logger=migrator t=2025-06-16T14:56:22.39905545Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-16T14:56:22.401027033Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.971423ms grafana | logger=migrator t=2025-06-16T14:56:22.405283363Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-16T14:56:22.411500537Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.216524ms grafana | logger=migrator t=2025-06-16T14:56:22.414727715Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-16T14:56:22.423339177Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.610762ms grafana | logger=migrator t=2025-06-16T14:56:22.426684547Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-16T14:56:22.432538056Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.853039ms grafana | logger=migrator t=2025-06-16T14:56:22.436466122Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-16T14:56:22.445257946Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.786054ms grafana | logger=migrator t=2025-06-16T14:56:22.448692476Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-16T14:56:22.449491806Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=798.79µs grafana | logger=migrator t=2025-06-16T14:56:22.452562642Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-16T14:56:22.453392812Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=829.76µs grafana | logger=migrator t=2025-06-16T14:56:22.459242581Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-16T14:56:22.460039141Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=796.21µs grafana | logger=migrator t=2025-06-16T14:56:22.463618243Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-16T14:56:22.474710904Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=11.092731ms grafana | logger=migrator t=2025-06-16T14:56:22.479265008Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-16T14:56:22.480591073Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.327936ms grafana | 
logger=migrator t=2025-06-16T14:56:22.484313177Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-16T14:56:22.485453071Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.139754ms grafana | logger=migrator t=2025-06-16T14:56:22.489047153Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-16T14:56:22.490101535Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.054892ms grafana | logger=migrator t=2025-06-16T14:56:22.494901582Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-16T14:56:22.496752824Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.850012ms grafana | logger=migrator t=2025-06-16T14:56:22.501069895Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-16T14:56:22.501153106Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=84.501µs grafana | logger=migrator t=2025-06-16T14:56:22.505036672Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-16T14:56:22.506054214Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.017022ms grafana | logger=migrator t=2025-06-16T14:56:22.509440764Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-16T14:56:22.509539345Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=99.281µs grafana | logger=migrator t=2025-06-16T14:56:22.514030528Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-16T14:56:22.514832458Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=802.25µs grafana | logger=migrator t=2025-06-16T14:56:22.518804125Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-16T14:56:22.519822577Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.019782ms grafana | logger=migrator t=2025-06-16T14:56:22.523592361Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-16T14:56:22.524286339Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=694.278µs grafana | logger=migrator t=2025-06-16T14:56:22.52773636Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-16T14:56:22.528025163Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=288.533µs grafana | logger=migrator t=2025-06-16T14:56:22.532997902Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-16T14:56:22.534119456Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.125304ms grafana | logger=migrator t=2025-06-16T14:56:22.538378806Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator 
t=2025-06-16T14:56:22.539389488Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.009782ms grafana | logger=migrator t=2025-06-16T14:56:22.543591198Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-16T14:56:22.544777602Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.186414ms grafana | logger=migrator t=2025-06-16T14:56:22.549589158Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-16T14:56:22.558091729Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.501871ms grafana | logger=migrator t=2025-06-16T14:56:22.561354477Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-16T14:56:22.561375088Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=21.111µs grafana | logger=migrator t=2025-06-16T14:56:22.565471046Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-16T14:56:22.56667524Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.203434ms grafana | logger=migrator t=2025-06-16T14:56:22.569941709Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-16T14:56:22.571068462Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.126263ms grafana | logger=migrator t=2025-06-16T14:56:22.576453696Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-16T14:56:22.57768048Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.225734ms grafana | logger=migrator t=2025-06-16T14:56:22.581120001Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-16T14:56:22.589722013Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.595841ms grafana | logger=migrator t=2025-06-16T14:56:22.593190573Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.594493499Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.302526ms grafana | logger=migrator t=2025-06-16T14:56:22.59964319Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.600845974Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.202384ms grafana | logger=migrator t=2025-06-16T14:56:22.604695029Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T14:56:22.628938226Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.242637ms grafana | logger=migrator t=2025-06-16T14:56:22.634444301Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-16T14:56:22.635612745Z level=info msg="Migration successfully executed" id="create correlation v2" 
duration=1.163414ms grafana | logger=migrator t=2025-06-16T14:56:22.639282908Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.640423421Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.139853ms grafana | logger=migrator t=2025-06-16T14:56:22.645057147Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.646273541Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.216354ms grafana | logger=migrator t=2025-06-16T14:56:22.651473672Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-16T14:56:22.652749097Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.274345ms grafana | logger=migrator t=2025-06-16T14:56:22.656275209Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-16T14:56:22.656722084Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=450.485µs grafana | logger=migrator t=2025-06-16T14:56:22.660480058Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-16T14:56:22.661374188Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=893.73µs grafana | logger=migrator t=2025-06-16T14:56:22.666411488Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-16T14:56:22.675248192Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.836164ms grafana | logger=migrator t=2025-06-16T14:56:22.67849286Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-16T14:56:22.687141153Z level=info msg="Migration successfully executed" id="add type column" duration=8.647733ms grafana | logger=migrator t=2025-06-16T14:56:22.690678664Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-16T14:56:22.691595335Z level=info msg="Migration successfully executed" id="create entity_events table" duration=880.451µs grafana | logger=migrator t=2025-06-16T14:56:22.69790025Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-16T14:56:22.698959592Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.069163ms grafana | logger=migrator t=2025-06-16T14:56:22.702427323Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.703139391Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.706759294Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.707458562Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.710952414Z level=info msg="Executing migration" id="Drop old dashboard public config table" 
grafana | logger=migrator t=2025-06-16T14:56:22.711725973Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=773.239µs grafana | logger=migrator t=2025-06-16T14:56:22.717293359Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-16T14:56:22.718558684Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.263565ms grafana | logger=migrator t=2025-06-16T14:56:22.722928255Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.724662366Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.733681ms grafana | logger=migrator t=2025-06-16T14:56:22.731864461Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T14:56:22.733983376Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.118475ms grafana | logger=migrator t=2025-06-16T14:56:22.737211254Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.738501949Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.291115ms grafana | logger=migrator t=2025-06-16T14:56:22.741868939Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.74282802Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=953.761µs grafana | logger=migrator t=2025-06-16T14:56:22.748455557Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-16T14:56:22.749228446Z level=info msg="Migration successfully executed" id="Drop public config table" duration=772.399µs grafana | logger=migrator t=2025-06-16T14:56:22.752367683Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-16T14:56:22.753524847Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.157344ms grafana | logger=migrator t=2025-06-16T14:56:22.758544646Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.760257677Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.71261ms grafana | logger=migrator t=2025-06-16T14:56:22.763845369Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:22.764954722Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.102153ms grafana | logger=migrator t=2025-06-16T14:56:22.767926167Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-16T14:56:22.76905024Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.123493ms grafana | logger=migrator 
t=2025-06-16T14:56:22.773578794Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-16T14:56:22.79779225Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.213426ms grafana | logger=migrator t=2025-06-16T14:56:22.801449203Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-16T14:56:22.810345198Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.894665ms grafana | logger=migrator t=2025-06-16T14:56:22.813352524Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-16T14:56:22.819729019Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.376065ms grafana | logger=migrator t=2025-06-16T14:56:22.824526995Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-16T14:56:22.824758118Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=231.313µs grafana | logger=migrator t=2025-06-16T14:56:22.827944146Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-16T14:56:22.838924216Z level=info msg="Migration successfully executed" id="add share column" duration=10.97446ms grafana | logger=migrator t=2025-06-16T14:56:22.842178284Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-16T14:56:22.842351056Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=173.032µs grafana | logger=migrator t=2025-06-16T14:56:22.84692063Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-16T14:56:22.847863151Z level=info msg="Migration successfully executed" id="create file table" duration=942.331µs grafana | logger=migrator t=2025-06-16T14:56:22.850991828Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-16T14:56:22.852215853Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.222865ms grafana | logger=migrator t=2025-06-16T14:56:22.856278001Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-16T14:56:22.858147253Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.868852ms grafana | logger=migrator t=2025-06-16T14:56:22.863207662Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-16T14:56:22.864031922Z level=info msg="Migration successfully executed" id="create file_meta table" duration=823.91µs grafana | logger=migrator t=2025-06-16T14:56:22.867453723Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-16T14:56:22.868734508Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.280575ms grafana | logger=migrator t=2025-06-16T14:56:22.874115132Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-16T14:56:22.874142142Z level=info msg="Migration 
successfully executed" id="set path collation in file table" duration=33.48µs grafana | logger=migrator t=2025-06-16T14:56:22.878725736Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-16T14:56:22.878768447Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=38.08µs grafana | logger=migrator t=2025-06-16T14:56:22.884209541Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-16T14:56:22.885228943Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.019002ms grafana | logger=migrator t=2025-06-16T14:56:22.888812635Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-16T14:56:22.88922282Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=414.645µs grafana | logger=migrator t=2025-06-16T14:56:22.892795562Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-16T14:56:22.89431743Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.529408ms grafana | logger=migrator t=2025-06-16T14:56:22.897618739Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-16T14:56:22.907239563Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.620154ms grafana | logger=migrator t=2025-06-16T14:56:22.912644877Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-16T14:56:22.912828539Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=183.122µs grafana | logger=migrator t=2025-06-16T14:56:22.916117148Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-16T14:56:22.917345492Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.228504ms grafana | logger=migrator t=2025-06-16T14:56:22.920518Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-16T14:56:22.921234018Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=705.968µs grafana | logger=migrator t=2025-06-16T14:56:22.925989384Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-16T14:56:22.926379159Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=370.955µs grafana | logger=migrator t=2025-06-16T14:56:22.933305331Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-16T14:56:22.933851297Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=545.936µs grafana | logger=migrator t=2025-06-16T14:56:22.937155876Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-16T14:56:22.949091267Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.941471ms grafana | logger=migrator t=2025-06-16T14:56:22.953350788Z level=info 
msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-16T14:56:22.962459865Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.109917ms grafana | logger=migrator t=2025-06-16T14:56:22.967635696Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-16T14:56:22.968745339Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.109853ms grafana | logger=migrator t=2025-06-16T14:56:22.972096049Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-16T14:56:23.050778278Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.678679ms grafana | logger=migrator t=2025-06-16T14:56:23.055372192Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-16T14:56:23.056313773Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=941.271µs grafana | logger=migrator t=2025-06-16T14:56:23.061718207Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-16T14:56:23.063586379Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.867632ms grafana | logger=migrator t=2025-06-16T14:56:23.067238942Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-16T14:56:23.101589028Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=34.347136ms grafana | logger=migrator t=2025-06-16T14:56:23.10518934Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-16T14:56:23.111666627Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.476977ms grafana | logger=migrator t=2025-06-16T14:56:23.11616369Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-16T14:56:23.116420983Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=251.843µs grafana | logger=migrator t=2025-06-16T14:56:23.118913322Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-16T14:56:23.119056034Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=142.502µs grafana | logger=migrator t=2025-06-16T14:56:23.121814256Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-16T14:56:23.122222721Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=408.315µs grafana | logger=migrator t=2025-06-16T14:56:23.124751501Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-16T14:56:23.125148806Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=396.975µs grafana | logger=migrator 
t=2025-06-16T14:56:23.131393799Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-16T14:56:23.131606492Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=213.003µs grafana | logger=migrator t=2025-06-16T14:56:23.134533656Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-16T14:56:23.135462797Z level=info msg="Migration successfully executed" id="create folder table" duration=928.821µs grafana | logger=migrator t=2025-06-16T14:56:23.138350341Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-16T14:56:23.139450884Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.099873ms grafana | logger=migrator t=2025-06-16T14:56:23.142389839Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-16T14:56:23.143484362Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.094303ms grafana | logger=migrator t=2025-06-16T14:56:23.148247418Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-16T14:56:23.148268239Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.11µs grafana | logger=migrator t=2025-06-16T14:56:23.151123042Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T14:56:23.152212675Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.089553ms grafana | logger=migrator t=2025-06-16T14:56:23.156587847Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T14:56:23.15774881Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.155893ms grafana | logger=migrator t=2025-06-16T14:56:23.162955602Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-16T14:56:23.164790653Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.834591ms grafana | logger=migrator t=2025-06-16T14:56:23.171324261Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-16T14:56:23.171760346Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=435.865µs grafana | logger=migrator t=2025-06-16T14:56:23.174317766Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-16T14:56:23.174584159Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=266.383µs grafana | logger=migrator t=2025-06-16T14:56:23.180108294Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-16T14:56:23.181234838Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.125834ms grafana | logger=migrator t=2025-06-16T14:56:23.184359404Z level=info msg="Executing migration" id="Add unique 
index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-16T14:56:23.186456879Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.096955ms grafana | logger=migrator t=2025-06-16T14:56:23.189620177Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T14:56:23.191444678Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.824602ms grafana | logger=migrator t=2025-06-16T14:56:23.197280447Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T14:56:23.198489491Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.209214ms grafana | logger=migrator t=2025-06-16T14:56:23.204800776Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T14:56:23.205960719Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.159493ms grafana | logger=migrator t=2025-06-16T14:56:23.210171129Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T14:56:23.211203381Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.031902ms grafana | logger=migrator t=2025-06-16T14:56:23.214094215Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-16T14:56:23.215026656Z level=info msg="Migration successfully executed" id="create anon_device table" duration=932.031µs grafana | logger=migrator t=2025-06-16T14:56:23.218620259Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-16T14:56:23.219687851Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.067482ms grafana | logger=migrator t=2025-06-16T14:56:23.222449004Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-16T14:56:23.223601868Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.152584ms grafana | logger=migrator t=2025-06-16T14:56:23.227258451Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-16T14:56:23.228111091Z level=info msg="Migration successfully executed" id="create signing_key table" duration=852.6µs grafana | logger=migrator t=2025-06-16T14:56:23.232082248Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-16T14:56:23.234546987Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.463169ms grafana | logger=migrator t=2025-06-16T14:56:23.23820915Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-16T14:56:23.2407345Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.51551ms grafana | logger=migrator t=2025-06-16T14:56:23.246590639Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator 
t=2025-06-16T14:56:23.247096875Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=506.626µs grafana | logger=migrator t=2025-06-16T14:56:23.252143355Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-16T14:56:23.262356525Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.213ms grafana | logger=migrator t=2025-06-16T14:56:23.266373053Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-16T14:56:23.267079511Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=707.168µs grafana | logger=migrator t=2025-06-16T14:56:23.270228938Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T14:56:23.270278989Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=44.881µs grafana | logger=migrator t=2025-06-16T14:56:23.276451542Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-16T14:56:23.277729467Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.277225ms grafana | logger=migrator t=2025-06-16T14:56:23.281193168Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T14:56:23.281304149Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=112.161µs grafana | logger=migrator t=2025-06-16T14:56:23.286123206Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T14:56:23.288704556Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.59275ms grafana | logger=migrator t=2025-06-16T14:56:23.295001621Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-16T14:56:23.296264206Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.261695ms grafana | logger=migrator t=2025-06-16T14:56:23.299613125Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T14:56:23.30084327Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.229865ms grafana | logger=migrator t=2025-06-16T14:56:23.305183201Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-16T14:56:23.306366395Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.181804ms grafana | logger=migrator t=2025-06-16T14:56:23.311945581Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-16T14:56:23.312850301Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=905.26µs grafana | logger=migrator t=2025-06-16T14:56:23.318174264Z level=info 
msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-16T14:56:23.318773901Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=600.767µs grafana | logger=migrator t=2025-06-16T14:56:23.323060712Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-16T14:56:23.324206815Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.145303ms grafana | logger=migrator t=2025-06-16T14:56:23.330869034Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-16T14:56:23.332550374Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.68131ms grafana | logger=migrator t=2025-06-16T14:56:23.336561361Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-16T14:56:23.338431663Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.879562ms grafana | logger=migrator t=2025-06-16T14:56:23.342057086Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-16T14:56:23.348971058Z level=info msg="Migration successfully executed" id="add stack_id column" duration=6.907852ms grafana | logger=migrator t=2025-06-16T14:56:23.355230651Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-16T14:56:23.364868155Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.636764ms grafana | logger=migrator t=2025-06-16T14:56:23.370934597Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-16T14:56:23.379409237Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.47341ms grafana | logger=migrator t=2025-06-16T14:56:23.383169111Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-16T14:56:23.39320297Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.033409ms grafana | logger=migrator t=2025-06-16T14:56:23.39914691Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-16T14:56:23.399358092Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=210.792µs grafana | logger=migrator t=2025-06-16T14:56:23.403669393Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-16T14:56:23.404939718Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.270345ms grafana | logger=migrator t=2025-06-16T14:56:23.408694522Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-16T14:56:23.418305656Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.610504ms grafana | logger=migrator t=2025-06-16T14:56:23.427803478Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-16T14:56:23.428133022Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=331.574µs grafana | 
logger=migrator t=2025-06-16T14:56:23.432921968Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-16T14:56:23.434916442Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.993964ms grafana | logger=migrator t=2025-06-16T14:56:23.438327042Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T14:56:23.466776778Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=28.450146ms grafana | logger=migrator t=2025-06-16T14:56:23.469659542Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-16T14:56:23.470542592Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=886.59µs grafana | logger=migrator t=2025-06-16T14:56:23.473610429Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:23.474796023Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.185654ms grafana | logger=migrator t=2025-06-16T14:56:23.47798331Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-16T14:56:23.478345684Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=362.194µs grafana | logger=migrator t=2025-06-16T14:56:23.480595751Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-16T14:56:23.481521702Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=925.431µs grafana | logger=migrator t=2025-06-16T14:56:23.485427048Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T14:56:23.509928897Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=24.500459ms grafana | logger=migrator t=2025-06-16T14:56:23.512900422Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-16T14:56:23.51360081Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=699.918µs grafana | logger=migrator t=2025-06-16T14:56:23.516489755Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-16T14:56:23.517358435Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=872.56µs grafana | logger=migrator t=2025-06-16T14:56:23.520294779Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-16T14:56:23.520525032Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=230.083µs grafana | logger=migrator t=2025-06-16T14:56:23.523387746Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-16T14:56:23.524674471Z level=info msg="Migration successfully executed" id="drop 
cloud_migration_snapshot_tmp_qwerty" duration=1.286225ms grafana | logger=migrator t=2025-06-16T14:56:23.528193463Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-16T14:56:23.538826638Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.633145ms grafana | logger=migrator t=2025-06-16T14:56:23.541899564Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-16T14:56:23.548777076Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=6.876912ms grafana | logger=migrator t=2025-06-16T14:56:23.553063576Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-16T14:56:23.563057854Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.993388ms grafana | logger=migrator t=2025-06-16T14:56:23.566170751Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-16T14:56:23.575529711Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.35833ms grafana | logger=migrator t=2025-06-16T14:56:23.579595219Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-16T14:56:23.589139732Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.535663ms grafana | logger=migrator t=2025-06-16T14:56:23.592126557Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-16T14:56:23.599021069Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=6.892932ms grafana | logger=migrator t=2025-06-16T14:56:23.605024109Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-16T14:56:23.60594135Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=916.831µs grafana | logger=migrator t=2025-06-16T14:56:23.608793084Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-16T14:56:23.643988239Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.192655ms grafana | logger=migrator t=2025-06-16T14:56:23.647645032Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-16T14:56:23.661316464Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=13.671362ms grafana | logger=migrator t=2025-06-16T14:56:23.664477421Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-16T14:56:23.674279937Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.801746ms grafana | logger=migrator t=2025-06-16T14:56:23.67712409Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-16T14:56:23.684039272Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=6.914592ms grafana | logger=migrator t=2025-06-16T14:56:23.687310731Z level=info msg="Executing 
migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-16T14:56:23.697780344Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=10.469333ms grafana | logger=migrator t=2025-06-16T14:56:23.704360252Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-16T14:56:23.704471193Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=112.871µs grafana | logger=migrator t=2025-06-16T14:56:23.709128758Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-16T14:56:23.709179369Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=51.331µs grafana | logger=migrator t=2025-06-16T14:56:23.712804081Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-16T14:56:23.726832557Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=14.027656ms grafana | logger=migrator t=2025-06-16T14:56:23.731272049Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T14:56:23.738311722Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.038773ms grafana | logger=migrator t=2025-06-16T14:56:23.743351302Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-16T14:56:23.743819067Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=467.465µs grafana | logger=migrator t=2025-06-16T14:56:23.747917656Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-16T14:56:23.7482524Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=334.654µs grafana | logger=migrator t=2025-06-16T14:56:23.75335608Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-16T14:56:23.767835791Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=14.479451ms grafana | logger=migrator t=2025-06-16T14:56:23.773135543Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T14:56:23.782536604Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.399881ms grafana | logger=migrator t=2025-06-16T14:56:23.787129329Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T14:56:23.800764539Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=13.6363ms grafana | logger=migrator t=2025-06-16T14:56:23.806070982Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T14:56:23.817865251Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=11.805379ms grafana | 
grafana | logger=migrator t=2025-06-16T14:56:23.820999978Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
grafana | logger=migrator t=2025-06-16T14:56:23.821465344Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=460.676µs
grafana | logger=migrator t=2025-06-16T14:56:23.826130069Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
grafana | logger=migrator t=2025-06-16T14:56:23.835812583Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.681514ms
grafana | logger=migrator t=2025-06-16T14:56:23.839153313Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
grafana | logger=migrator t=2025-06-16T14:56:23.846219456Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=7.065633ms
grafana | logger=migrator t=2025-06-16T14:56:23.849255812Z level=info msg="Executing migration" id="delete orphaned service account permissions"
grafana | logger=migrator t=2025-06-16T14:56:23.849448834Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=193.012µs
grafana | logger=migrator t=2025-06-16T14:56:23.854504944Z level=info msg="Executing migration" id="adding action set permissions"
grafana | logger=migrator t=2025-06-16T14:56:23.854871888Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=366.944µs
grafana | logger=migrator t=2025-06-16T14:56:23.859788946Z level=info msg="Executing migration" id="create user_external_session table"
grafana | logger=migrator t=2025-06-16T14:56:23.861679988Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.890142ms
grafana | logger=migrator t=2025-06-16T14:56:23.867244234Z level=info msg="Executing migration" id="increase name_id column length to 1024"
grafana | logger=migrator t=2025-06-16T14:56:23.867267574Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=24.24µs
grafana | logger=migrator t=2025-06-16T14:56:23.8702802Z level=info msg="Executing migration" id="increase session_id column length to 1024"
grafana | logger=migrator t=2025-06-16T14:56:23.87029709Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=17.9µs
grafana | logger=migrator t=2025-06-16T14:56:23.874364678Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create"
grafana | logger=migrator t=2025-06-16T14:56:23.874911055Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=546.306µs
grafana | logger=migrator t=2025-06-16T14:56:23.878127142Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table"
grafana | logger=migrator t=2025-06-16T14:56:23.890807572Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.68063ms
grafana | logger=migrator t=2025-06-16T14:56:23.8948522Z level=info msg="Executing migration" id="add updated_by column to alert_rule table"
grafana | logger=migrator t=2025-06-16T14:56:23.904543794Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.684484ms
grafana | logger=migrator t=2025-06-16T14:56:23.908516141Z level=info msg="Executing migration" id="add alert_rule_state table"
grafana | logger=migrator t=2025-06-16T14:56:23.909562253Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.045382ms
grafana | logger=migrator t=2025-06-16T14:56:23.916830499Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns"
grafana | logger=migrator t=2025-06-16T14:56:23.919014625Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.184126ms
grafana | logger=migrator t=2025-06-16T14:56:23.922968322Z level=info msg="Executing migration" id="add guid column to alert_rule table"
grafana | logger=migrator t=2025-06-16T14:56:23.933294354Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.325812ms
grafana | logger=migrator t=2025-06-16T14:56:23.939176813Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table"
grafana | logger=migrator t=2025-06-16T14:56:23.948738866Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.560283ms
grafana | logger=migrator t=2025-06-16T14:56:23.959322881Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
grafana | logger=migrator t=2025-06-16T14:56:23.959359991Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
grafana | logger=migrator t=2025-06-16T14:56:23.959720805Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
grafana | logger=migrator t=2025-06-16T14:56:23.959749416Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=427.355µs
grafana | logger=migrator t=2025-06-16T14:56:23.96353247Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
grafana | logger=migrator t=2025-06-16T14:56:23.964492152Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=958.362µs
grafana | logger=migrator t=2025-06-16T14:56:23.968464289Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-16T14:56:23.969705783Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.241254ms
grafana | logger=migrator t=2025-06-16T14:56:23.974669272Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
grafana | logger=migrator t=2025-06-16T14:56:23.976271181Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.601859ms
grafana | logger=migrator t=2025-06-16T14:56:23.979927684Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
grafana | logger=migrator t=2025-06-16T14:56:23.98130236Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.374666ms
grafana | logger=migrator t=2025-06-16T14:56:23.984919813Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
guid columns" grafana | logger=migrator t=2025-06-16T14:56:23.98640575Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.492017ms grafana | logger=migrator t=2025-06-16T14:56:23.991821374Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:24.003724455Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.896511ms grafana | logger=migrator t=2025-06-16T14:56:24.007713641Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-16T14:56:24.021038179Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=13.338968ms grafana | logger=migrator t=2025-06-16T14:56:24.024699122Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-16T14:56:24.034383136Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.683634ms grafana | logger=migrator t=2025-06-16T14:56:24.040368366Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-16T14:56:24.048635904Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=8.266708ms grafana | logger=migrator t=2025-06-16T14:56:24.053054346Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-16T14:56:24.053257488Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-16T14:56:24.053272689Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=219.142µs grafana | logger=migrator t=2025-06-16T14:56:24.05678896Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-16T14:56:24.057897993Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.109053ms grafana | logger=migrator t=2025-06-16T14:56:24.062363336Z level=info msg="migrations completed" performed=654 skipped=0 duration=4.565053505s grafana | logger=migrator t=2025-06-16T14:56:24.063165135Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-16T14:56:24.08226491Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-16T14:56:24.082485513Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-16T14:56:24.088485814Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T14:56:24.195088811Z level=info msg="Restored cache from database" duration=481.696µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.203216456Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-16T14:56:24.203231986Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-16T14:56:24.210648824Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-16T14:56:24.211410103Z level=info msg="Migration successfully executed" id="create resource_migration_log table" 
duration=760.799µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.220961416Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-16T14:56:24.220979486Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=18.98µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.224585908Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-16T14:56:24.224669519Z level=info msg="Migration successfully executed" id="drop table resource" duration=84.111µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.227997349Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-16T14:56:24.229115432Z level=info msg="Migration successfully executed" id="create table resource" duration=1.117613ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.238239519Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-16T14:56:24.240275833Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=2.035964ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.243825145Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.243902606Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=77.791µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.249775955Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.251558276Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.780191ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.255851927Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-16T14:56:24.25863274Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.779283ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.265579742Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-16T14:56:24.266703885Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.123343ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.270096955Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-16T14:56:24.270220856Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=124.301µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.276125896Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-16T14:56:24.277643064Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.516158ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.285303244Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-16T14:56:24.286462998Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.159224ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.289661395Z level=info msg="Executing 
migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-16T14:56:24.289741126Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=80.131µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.294026157Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-16T14:56:24.295323312Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.296285ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.301427764Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-16T14:56:24.303391907Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.964253ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.310791345Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-16T14:56:24.3120514Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.259765ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.315544321Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.325821112Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.284461ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.33334105Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-16T14:56:24.346141281Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=12.801031ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.350332641Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-16T14:56:24.351194171Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=859.75µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.354214727Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-16T14:56:24.355031956Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=816.799µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.361862297Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.369632888Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=7.770781ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.37397894Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-16T14:56:24.38336695Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.38655ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.387772732Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-16T14:56:24.387827453Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-16T14:56:24.388286788Z level=info msg="Migration 
successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=514.026µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.396771728Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-16T14:56:24.398837823Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.065395ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.402505116Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.415378308Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.873942ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.421829924Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-16T14:56:24.422749245Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=918.791µs grafana | logger=resource-migrator t=2025-06-16T14:56:24.428829626Z level=info msg="migrations completed" performed=26 skipped=0 duration=218.229573ms grafana | logger=resource-migrator t=2025-06-16T14:56:24.429864579Z level=info msg="Unlocking database" grafana | t=2025-06-16T14:56:24.430295794Z level=info caller=logger.go:214 time=2025-06-16T14:56:24.430270023Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-16T14:56:24.443689361Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-16T14:56:24.476711701Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-16T14:56:24.476734011Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-16T14:56:24.476777872Z level=info msg="Plugins loaded" count=53 duration=33.089321ms grafana | logger=query_data t=2025-06-16T14:56:24.481250964Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-16T14:56:24.486510426Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T14:56:24.501724256Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-16T14:56:24.509813961Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-16T14:56:24.509871672Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-16T14:56:24.511931386Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-16T14:56:24.51223507Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:24.512497023Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=http.server t=2025-06-16T14:56:24.518794467Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.state.manager t=2025-06-16T14:56:24.518947209Z level=info msg="Warming state cache for startup" grafana | logger=sqlstore.transactions t=2025-06-16T14:56:24.52670071Z level=info msg="Database locked, sleeping then retrying" 
error="database is locked" retry=0 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T14:56:24.525689948Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=plugins.update.checker t=2025-06-16T14:56:24.613861978Z level=info msg="Update check succeeded" duration=101.332755ms grafana | logger=grafana.update.checker t=2025-06-16T14:56:24.630953579Z level=info msg="Update check succeeded" duration=118.512477ms grafana | logger=ngalert.state.manager t=2025-06-16T14:56:24.655725121Z level=info msg="State cache has been initialized" states=0 duration=136.777852ms grafana | logger=ngalert.scheduler t=2025-06-16T14:56:24.655770532Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-16T14:56:24.655839003Z level=info msg=starting first_tick=2025-06-16T14:56:30Z grafana | logger=provisioning.datasources t=2025-06-16T14:56:24.662272999Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2025-06-16T14:56:24.683372017Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-16T14:56:24.683541509Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-16T14:56:24.685171478Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T14:56:24.762207127Z level=info msg="Patterns update finished" duration=132.606704ms grafana | logger=plugin.installer t=2025-06-16T14:56:24.91427076Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-16T14:56:24.981090127Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-16T14:56:25.007739621Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.007758792Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=495.220898ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.007779082Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=plugin.installer t=2025-06-16T14:56:25.321647068Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.351396499Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.36934882Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.370437573Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.371439465Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.372520978Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.374107386Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.376506895Z level=info msg="Adding GroupVersion 
iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.379821513Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T14:56:25.380899146Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-16T14:56:25.445655749Z level=info msg="app registry initialized" grafana | logger=installer.fs t=2025-06-16T14:56:25.477628105Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-16T14:56:25.510489553Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.510519963Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=502.734911ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.510549423Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-16T14:56:25.716670091Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-16T14:56:25.780045537Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-16T14:56:25.815176511Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.815213841Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=304.658438ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:25.815240992Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=provisioning.dashboard t=2025-06-16T14:56:25.940868821Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-16T14:56:26.028867257Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-16T14:56:26.093977523Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-16T14:56:26.110893822Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:56:26.110918053Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=295.672501ms grafana | logger=infra.usagestats t=2025-06-16T14:57:11.522443535Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-16 14:56:16,593] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,593] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,594] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,597] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,600] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-16 14:56:16,604] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-16 14:56:16,611] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:16,629] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:16,629] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:16,637] INFO Socket connection established, initiating session, client: /172.17.0.6:54210, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:16,665] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000027d360000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:16,784] INFO Session: 0x10000027d360000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:16,784] INFO EventThread shut down for session: 0x10000027d360000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-16 14:56:17,433] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2025-06-16 14:56:17,720] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-16 14:56:17,791] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2025-06-16 14:56:17,793] INFO starting (kafka.server.KafkaServer)
kafka | [2025-06-16 14:56:17,793] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2025-06-16 14:56:17,805] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,809] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,811] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 14:56:17,814] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-16 14:56:17,823] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:17,826] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-16 14:56:17,829] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:17,837] INFO Socket connection established, initiating session, client: /172.17.0.6:54212, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:17,845] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000027d360001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 14:56:17,849] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-16 14:56:18,125] INFO Cluster ID = RGdXTQKZTVW282RMUqCsFg (kafka.server.KafkaServer) kafka | [2025-06-16 14:56:18,130] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-16 14:56:18,181] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-16 14:56:18,212] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 14:56:18,213] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 14:56:18,220] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 14:56:18,223] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 14:56:18,259] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-16 14:56:18,264] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-16 14:56:18,274] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager) kafka | [2025-06-16 14:56:18,274] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-16 14:56:18,276] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
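
Note: the KafkaConfig dump above is the broker's effective configuration (advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, offsets.topic.replication.factor = 1, and so on). A hedged sketch of reading a few of those values back from the running broker with the standard AdminClient; the bootstrap address kafka:9092 comes from the advertised listener in the log, the class name and chosen keys are illustrative.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowBrokerConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // broker.id = 1 in the dump above, so query broker "1".
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Map<ConfigResource, Config> configs =
                    admin.describeConfigs(List.of(broker)).all().get();
            Config cfg = configs.get(broker);
            for (String name : List.of("advertised.listeners",
                                       "offsets.topic.replication.factor",
                                       "log.retention.hours")) {
                System.out.println(name + " = " + cfg.get(name).value());
            }
        }
    }
}
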
(kafka.log.LogManager) kafka | [2025-06-16 14:56:18,288] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-16 14:56:18,348] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-16 14:56:18,364] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-16 14:56:18,389] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-16 14:56:18,440] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 14:56:18,755] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-16 14:56:18,758] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-16 14:56:18,779] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-16 14:56:18,779] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-16 14:56:18,780] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-16 14:56:18,784] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-16 14:56:18,789] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 14:56:18,804] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,806] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,808] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,811] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,821] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-16 14:56:18,848] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-16 14:56:18,868] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750085778858,1750085778858,1,0,0,72057604728553473,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-16 14:56:18,870] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-16 14:56:18,925] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-16 14:56:18,931] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,937] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,941] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:18,948] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-16 14:56:18,952] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:18,959] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:18,959] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:18,964] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:18,969] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-16 14:56:18,974] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-16 14:56:18,980] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-16 14:56:18,980] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-16 14:56:19,005] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
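
Note: the Stat line above describes the ephemeral registration znode the broker just wrote. A small sketch, again with the plain ZooKeeper client, that reads that registration back; the path, address list, and czxid are the ones logged, the helper class itself is illustrative.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ShowBrokerRegistration {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        Stat stat = new Stat();
        // The broker registered at /brokers/ids/1 above; the node data is a JSON
        // blob listing PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092.
        byte[] data = zk.getData("/brokers/ids/1", false, stat);
        System.out.println("czxid (broker epoch): " + stat.getCzxid());
        System.out.println(new String(data));
        zk.close();
    }
}
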
(kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-16 14:56:19,006] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,012] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,016] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,017] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 14:56:19,020] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,039] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-16 14:56:19,039] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,049] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,052] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2025-06-16 14:56:19,055] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-16 14:56:19,064] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 14:56:19,064] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 14:56:19,064] INFO Kafka startTimeMs: 1750085779058 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 14:56:19,073] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-16 14:56:19,075] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,076] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,077] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-16 14:56:19,077] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,079] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,082] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,082] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,082] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,083] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-16 14:56:19,083] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,086] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-16 14:56:19,093] INFO [ReplicaStateMachine controllerId=1] Initializing replica state 
(kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 14:56:19,094] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 14:56:19,099] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 14:56:19,099] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 14:56:19,100] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 14:56:19,102] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 14:56:19,105] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-16 14:56:19,110] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 14:56:19,110] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,122] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,122] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,123] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,123] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,127] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,139] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:19,158] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 14:56:19,172] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 14:56:19,196] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 14:56:24,141] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:24,141] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:51,395] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:51,398] INFO Creating topic policy-pdp-pap with configuration {} and initial partition 
assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 14:56:51,403] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:51,404] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 14:56:51,444] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(a3MjtT2pQTmrm59pKd_tgw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(2aDumyhlThe3S-CctXae-w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:51,445] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
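
Note: the controller log above records two topics being created: policy-pdp-pap with a single partition, and __consumer_offsets with 50 compacted partitions, all assigned to broker 1. A minimal, hypothetical AdminClient equivalent of the policy-pdp-pap creation follows; the topic name, partition count, and replication factor come from the log, but this is not how the CSIT environment actually creates it (with auto.create.topics.enable = true in the config dump above, the broker auto-creates it on first use).

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePdpPapTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // Mirrors the log: 1 partition, replication factor 1, landing on broker 1.
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(List.of(topic)).all().get();
            System.out.println("created policy-pdp-pap");
        }
    }
}
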
[2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,447] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,448] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,449] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 14:56:51,449] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 14:56:51,453] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,453] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,454] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 14:56:51,455] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,615] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 
14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
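These INFO entries, continuing below, record the controller's partition state machine moving every partition from NewPartition to OnlinePartition, each with LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1)); the TRACE entries above walked the matching replica state machine from NonExistentReplica to NewReplica. With a single-broker Kafka, broker 1 is unavoidably the leader and the sole ISR member everywhere. As a minimal sketch for sanity-checking such a log offline (not part of the CSIT job itself; the regexes are assumptions fitted to the line shapes visible in this output, not an official Kafka log grammar):

import re
from collections import Counter

# Illustrative patterns for the two state.change.logger line shapes above.
REPLICA_RE = re.compile(
    r"Changed state of replica (?P<replica>\d+) for partition "
    r"(?P<partition>\S+) from (?P<src>\w+) to (?P<dst>\w+)")
PARTITION_RE = re.compile(
    r"Changed partition (?P<partition>\S+) from (?P<src>\w+) to (?P<dst>\w+)")

def tally_transitions(lines):
    """Count (from_state, to_state) pairs seen in state.change.logger lines."""
    replica_moves, partition_moves = Counter(), Counter()
    for line in lines:
        m = REPLICA_RE.search(line)
        if m:
            replica_moves[(m["src"], m["dst"])] += 1
            continue
        m = PARTITION_RE.search(line)
        if m:
            partition_moves[(m["src"], m["dst"])] += 1
    return replica_moves, partition_moves

# Two sample lines copied verbatim from the console output above.
sample = [
    "kafka | [2025-06-16 14:56:51,455] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)",
    "kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)",
]
replica_moves, partition_moves = tally_transitions(sample)
print(replica_moves)    # Counter({('NonExistentReplica', 'NewReplica'): 1})
print(partition_moves)  # Counter({('NewPartition', 'OnlinePartition'): 1})

Fed a full console log, the tally should show one NewPartition-to-OnlinePartition move per partition and no replica left short of OnlineReplica.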
kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-16 14:56:51,616] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,617] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 14:56:51,620] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 14:56:51,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 14:56:51,621] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 14:56:51,622] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-16 14:56:51,624] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
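HashSet(1) for 51 partitions (state.change.logger)

At this point the controller has made broker 1 the leader for all 50 __consumer_offsets partitions plus policy-pdp-pap-0, which is where the 51 become-leader (and 0 become-follower) partitions in the summary above come from. Consumer groups are mapped onto those 50 offsets partitions by hashing the group id; a minimal sketch of that placement, assuming the default offsets.topic.num.partitions=50 and Kafka's usual groupId.hashCode-based mapping (the group id "policy-pap-group" below is a made-up example, not necessarily what the CSIT tests use, and ASCII group ids are assumed):

def partition_for_group(group_id: str, num_partitions: int = 50) -> int:
    """Mirror Java String.hashCode plus Kafka-style Utils.abs and modulo."""
    h = 0
    for ch in group_id:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF  # 32-bit overflow like a Java int
    if h >= 0x80000000:                      # reinterpret as signed
        h -= 0x100000000
    return (h & 0x7FFFFFFF) % num_partitions # non-negative 31 bits, then modulo

print(partition_for_group("policy-pap-group"))

With a single broker the answer is academic, since broker 1 leads every partition anyway, but on a multi-broker cluster it identifies which __consumer_offsets partition, and hence which group coordinator, serves a given group.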
kafka | [2025-06-16 14:56:51,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka |
[2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 14:56:51,627] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 14:56:51,633] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 14:56:51,673] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 14:56:51,674] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 14:56:51,675] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 14:56:51,676] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-16 14:56:51,676] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-16 14:56:51,741] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,751] INFO Created log for partition __consumer_offsets-3 in 
/var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,753] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,754] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,755] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,774] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,775] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,775] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,776] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,776] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,786] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,787] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,787] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,787] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,787] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,805] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,806] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,806] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,806] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,807] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,818] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,818] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,818] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,819] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,819] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,831] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,832] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,832] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,833] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,833] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,847] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,848] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,848] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,848] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,848] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,858] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,859] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,859] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,859] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,859] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,865] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,866] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,866] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,866] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,866] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,877] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,878] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,878] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,878] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,878] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,886] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,887] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,887] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,887] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,887] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,897] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,897] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,897] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,897] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,897] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,905] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,911] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,911] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,911] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,911] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,920] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,921] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,921] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,921] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,921] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,932] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,933] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,933] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,933] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,933] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,946] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,947] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,947] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,947] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,947] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,955] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,955] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,955] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,955] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,955] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,962] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,962] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,962] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,962] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,962] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:51,971] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,971] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,971] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,971] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,972] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,980] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,981] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,981] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,981] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,981] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:51,989] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:51,991] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:51,991] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,991] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:51,991] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,000] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,001] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,001] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,001] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,001] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,008] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,009] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,009] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,009] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,010] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,015] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,015] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,015] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,016] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,016] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,022] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,022] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,022] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,023] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,023] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,030] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,031] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,031] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,031] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,031] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,037] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,037] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,037] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,037] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,038] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,047] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,048] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,048] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,048] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,048] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,054] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,055] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,055] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,055] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,055] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,062] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,063] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,063] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,063] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,063] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,074] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,074] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,074] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,074] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,074] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,083] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,083] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,083] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,083] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,083] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,090] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,091] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,091] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,091] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,091] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,097] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,098] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,098] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,098] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,098] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,102] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,102] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,103] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,103] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,103] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(a3MjtT2pQTmrm59pKd_tgw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,107] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,108] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,108] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,108] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,108] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,115] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,115] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,115] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,115] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,115] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,121] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,122] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,122] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,122] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,122] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,131] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,131] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,131] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,131] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,131] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,141] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,142] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,142] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,142] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,142] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,149] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,150] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,150] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,150] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,150] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,159] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,160] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,160] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,160] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,160] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,170] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,171] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,172] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,172] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,172] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,177] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,177] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,177] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,177] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,177] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,191] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,191] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,191] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,191] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,191] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,198] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,198] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,198] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,198] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,198] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,209] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,210] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,210] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,210] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,210] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,217] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,217] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,217] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,217] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,217] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 14:56:52,225] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,225] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,226] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,226] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,226] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,232] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,233] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,233] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,233] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,233] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 14:56:52,241] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 14:56:52,242] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 14:56:52,242] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,242] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 14:56:52,242] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(2aDumyhlThe3S-CctXae-w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
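[editor's note] Every partition above comes up with leader epoch 0, an ISR of [1] and no replicas being added or removed, which is what a fresh single-broker cluster looks like. As a rough way to verify leader and ISR placement from a client, here is a sketch against the Admin API of recent Kafka clients (3.1+, where allTopicNames() exists); the class name is hypothetical and kafka:9092 is the broker from this log.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import java.util.List;
import java.util.Properties;

public class LeaderIsrCheck {  // hypothetical, for illustration only
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                    .allTopicNames().get().get("policy-pdp-pap");
            // On this one-broker CSIT cluster the expectation is leader=1, isr=[1] for partition 0.
            desc.partitions().forEach(p -> System.out.printf("partition=%d leader=%d isr=%s%n",
                    p.partition(), p.leader().id(), p.isr()));
        }
    }
}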
(state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 14:56:52,247] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 14:56:52,248] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 14:56:52,248] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 14:56:52,253] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,254] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,256] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,256] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,256] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,256] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,256] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,256] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,256] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,256] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,256] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,257] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,257] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,258] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,258] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,262] INFO [Broker id=1] Finished LeaderAndIsr request in 629ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-16 14:56:52,261] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,264] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
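[editor's note] Broker 1 is elected group coordinator for every __consumer_offsets partition it now leads (0 through 49, i.e. the default offsets.topic.num.partitions=50). Which coordinator partition serves a given consumer group is determined by hashing the group id; below is a minimal sketch of that mapping, assuming the scheme Kafka uses internally (a masked absolute value of String.hashCode modulo the partition count). The group id shown is a placeholder.

public class GroupCoordinatorPartition {  // illustration only
    // Mirrors Kafka's Utils.abs(groupId.hashCode()) % partitionCount mapping.
    static int coordinatorPartition(String groupId, int offsetsTopicPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }

    public static void main(String[] args) {
        // "policy-pap" is a made-up group id; the suite's real group ids may differ.
        System.out.println(coordinatorPartition("policy-pap", 50));
    }
}

The broker that leads the resulting __consumer_offsets partition acts as that group's coordinator, which is why the elections above track the become-leader transitions one for one.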
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
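[editor's note] Once the GroupMetadataManager finishes loading a __consumer_offsets partition, committed offsets for the consumer groups hashed onto it become queryable through the coordinator. A sketch of reading them back with the Admin API (the group id is a placeholder; kafka:9092 is the broker from this log):

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.Map;
import java.util.Properties;

public class GroupOffsetsDump {  // illustration only
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // "policy-pap" is a hypothetical group id; an empty map just means no commits yet.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("policy-pap").partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
        }
    }
}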
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,268] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=2aDumyhlThe3S-CctXae-w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=a3MjtT2pQTmrm59pKd_tgw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 14:56:52,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,280] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,281] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,282] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 24 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,282] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,282] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 14:56:52,283] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,284] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,285] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 14:56:52,286] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 14:56:52,942] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,962] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,985] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2dab126f-9d9c-40df-8665-fde68f19e9e7 in Empty state. Created a new member id consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:52,988] INFO [GroupCoordinator 1]: Preparing to rebalance group 2dab126f-9d9c-40df-8665-fde68f19e9e7 in state PreparingRebalance with old generation 0 (__consumer_offsets-43) (reason: Adding new member consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80 with group instance id None; client reason: need to re-join with the given member-id: consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:53,222] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group f9da0d51-d8fe-4efe-818b-a9b7c652cdb2 in Empty state. Created a new member id consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:53,225] INFO [GroupCoordinator 1]: Preparing to rebalance group f9da0d51-d8fe-4efe-818b-a9b7c652cdb2 in state PreparingRebalance with old generation 0 (__consumer_offsets-20) (reason: Adding new member consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874 with group instance id None; client reason: need to re-join with the given member-id: consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:55,974] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:55,988] INFO [GroupCoordinator 1]: Stabilized group 2dab126f-9d9c-40df-8665-fde68f19e9e7 generation 1 (__consumer_offsets-43) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:56,008] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80 for group 2dab126f-9d9c-40df-8665-fde68f19e9e7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:56,009] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:56,225] INFO [GroupCoordinator 1]: Stabilized group f9da0d51-d8fe-4efe-818b-a9b7c652cdb2 generation 1 (__consumer_offsets-20) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 14:56:56,231] INFO [GroupCoordinator 1]: Assignment received from leader consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874 for group f9da0d51-d8fe-4efe-818b-a9b7c652cdb2 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.6:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.9:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2025-06-16T14:56:52.118+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2025-06-16T14:56:52.310+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 2dab126f-9d9c-40df-8665-fde68f19e9e7 policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | 
metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-16T14:56:52.352+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-16T14:56:52.489+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-16T14:56:52.490+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-16T14:56:52.490+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085812488 policy-apex-pdp | [2025-06-16T14:56:52.492+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-1, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-16T14:56:52.511+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-16T14:56:52.512+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2025-06-16T14:56:52.513+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2dab126f-9d9c-40df-8665-fde68f19e9e7, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2025-06-16T14:56:52.530+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 2dab126f-9d9c-40df-8665-fde68f19e9e7 policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp 
| partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | 
[2025-06-16T14:56:52.531+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-16T14:56:52.544+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-16T14:56:52.544+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-16T14:56:52.544+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085812544 policy-apex-pdp | [2025-06-16T14:56:52.544+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-16T14:56:52.545+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9fd49440-91c8-41f1-b258-5133af767a10, alive=false, publisher=null]]: starting policy-apex-pdp | [2025-06-16T14:56:52.554+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.gzip.level = -1 policy-apex-pdp | compression.lz4.level = 9 policy-apex-pdp | compression.type = none policy-apex-pdp | compression.zstd.level = 3 policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | 
sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2025-06-16T14:56:52.555+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-16T14:56:52.563+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
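
[editor's note] The ProducerConfig dump above captures the settings the apex-pdp publisher starts with: bootstrap.servers=[kafka:9092], acks=-1, enable.idempotence=true, retries=2147483647, and String serializers — which is exactly why the log reports "Instantiated an idempotent producer." As a minimal sketch only (the real publisher is wrapped inside ONAP's InlineKafkaTopicSink/KafkaPublisherWrapper shown later in the log; the class name below is hypothetical), a standalone Java producer with the same key settings would look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Key values copied from the ProducerConfig dump above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // triggers the "idempotent producer" log line
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // retries = 2147483647
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // PDP messages in this deployment are plain JSON strings on topic policy-pdp-pap.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}

Idempotence combined with effectively unbounded retries prevents duplicates introduced by retried sends, which matches the defaults logged here.
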
policy-apex-pdp | [2025-06-16T14:56:52.579+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-16T14:56:52.579+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-16T14:56:52.580+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085812579 policy-apex-pdp | [2025-06-16T14:56:52.580+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9fd49440-91c8-41f1-b258-5133af767a10, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2025-06-16T14:56:52.580+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2025-06-16T14:56:52.580+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2025-06-16T14:56:52.581+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2025-06-16T14:56:52.582+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2025-06-16T14:56:52.591+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2025-06-16T14:56:52.591+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2025-06-16T14:56:52.591+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2025-06-16T14:56:52.592+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2dab126f-9d9c-40df-8665-fde68f19e9e7, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660 policy-apex-pdp | [2025-06-16T14:56:52.592+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2dab126f-9d9c-40df-8665-fde68f19e9e7, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2025-06-16T14:56:52.592+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2025-06-16T14:56:52.605+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2025-06-16T14:56:52.607+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"6cebbcec-d408-4054-ba08-6327227f6d72","timestampMs":1750085812592,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-16T14:56:52.819+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2025-06-16T14:56:52.819+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-16T14:56:52.819+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2025-06-16T14:56:52.819+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-apex-pdp | [2025-06-16T14:56:52.829+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-16T14:56:52.829+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-16T14:56:52.829+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
policy-apex-pdp | [2025-06-16T14:56:52.829+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-apex-pdp | [2025-06-16T14:56:52.949+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: RGdXTQKZTVW282RMUqCsFg policy-apex-pdp | [2025-06-16T14:56:52.949+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Cluster ID: RGdXTQKZTVW282RMUqCsFg policy-apex-pdp | [2025-06-16T14:56:52.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2025-06-16T14:56:52.965+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2025-06-16T14:56:52.968+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] (Re-)joining group policy-apex-pdp | [2025-06-16T14:56:52.986+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Request joining group due to: need to re-join with the given member-id: consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80 policy-apex-pdp | [2025-06-16T14:56:52.986+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] (Re-)joining group policy-apex-pdp | [2025-06-16T14:56:53.442+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2025-06-16T14:56:53.443+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | 
[2025-06-16T14:56:55.990+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80', protocol='range'} policy-apex-pdp | [2025-06-16T14:56:55.999+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Finished assignment for group at generation 1: {consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2025-06-16T14:56:56.034+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2-48e10886-1d25-4003-a1c4-9cd3e9c58e80', protocol='range'} policy-apex-pdp | [2025-06-16T14:56:56.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2025-06-16T14:56:56.038+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2025-06-16T14:56:56.052+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2025-06-16T14:56:56.069+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2dab126f-9d9c-40df-8665-fde68f19e9e7-2, groupId=2dab126f-9d9c-40df-8665-fde68f19e9e7] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
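
[editor's note] The sequence above is the classic Kafka group protocol: the consumer sends JoinGroup ("(Re-)joining group"), the broker-side GroupCoordinator stabilizes the generation, the group leader computes the assignment, and SyncGroup distributes it ("Successfully synced", "Adding newly assigned partitions: policy-pdp-pap-0"). With no committed offset and auto.offset.reset=latest (see the ConsumerConfig dumps above), the position is then reset to the log end, hence the FetchPosition{offset=1, ...} line. Below is a minimal standalone sketch that exercises the same flow, assuming the defaults logged above; this is not PAP's actual wiring (which goes through its KafkaConsumerWrapper), and the class name is hypothetical:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    // The "(__consumer_offsets-24)" seen in the rebalance logs for group policy-pap
    // comes from Kafka's group-to-coordinator-partition mapping:
    // abs(groupId.hashCode()) % offsets.topic.num.partitions (default 50).
    static int coordinatorPartition(String groupId) {
        return (groupId.hashCode() & 0x7fffffff) % 50; // "policy-pap" -> 24
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Key values copied from the ConsumerConfig dumps above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // why the position resets to the log end
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // poll() drives the JoinGroup/SyncGroup round trip logged above.
            consumer.poll(Duration.ofSeconds(5)).forEach(r -> System.out.println(r.value()));
        }
    }
}
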
policy-apex-pdp | [2025-06-16T14:56:56.207+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.5 - policyadmin [16/Jun/2025:14:56:56 +0000] "GET /metrics HTTP/1.1" 200 1922 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-16T14:57:12.592+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f5e59203-9419-414a-a427-70fdfc484626","timestampMs":1750085832592,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-16T14:57:12.619+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f5e59203-9419-414a-a427-70fdfc484626","timestampMs":1750085832592,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-16T14:57:12.621+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-16T14:57:12.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"44445093-e055-41f6-8cae-1170a53c5916","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.780+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-apex-pdp | [2025-06-16T14:57:12.780+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e96c3548-fd91-4b4f-89de-8949aaced35d","timestampMs":1750085832780,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-16T14:57:12.783+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"44445093-e055-41f6-8cae-1170a53c5916","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"37f110c8-2be7-4db1-ae5a-2dcf042cbd1d","timestampMs":1750085832783,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.798+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e96c3548-fd91-4b4f-89de-8949aaced35d","timestampMs":1750085832780,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-16T14:57:12.798+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-16T14:57:12.798+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"44445093-e055-41f6-8cae-1170a53c5916","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"37f110c8-2be7-4db1-ae5a-2dcf042cbd1d","timestampMs":1750085832783,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.799+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-16T14:57:12.834+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.836+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"eba19440-06b0-4538-88c0-2e0c36b5d8e5","timestampMs":1750085832836,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.845+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"eba19440-06b0-4538-88c0-2e0c36b5d8e5","timestampMs":1750085832836,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.845+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-16T14:57:12.870+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"807f3e4b-cf4b-4b69-8a31-82c20f855518","timestampMs":1750085832849,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.871+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"807f3e4b-cf4b-4b69-8a31-82c20f855518","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1ff2e457-3ca4-4833-a933-379eaf279048","timestampMs":1750085832871,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.879+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"807f3e4b-cf4b-4b69-8a31-82c20f855518","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1ff2e457-3ca4-4833-a933-379eaf279048","timestampMs":1750085832871,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:57:12.879+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-16T14:57:20.222+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - - [16/Jun/2025:14:57:20 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-16T14:57:40.283+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [16/Jun/2025:14:57:40 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-16T14:57:56.081+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.5 - policyadmin [16/Jun/2025:14:57:56 +0000] "GET /metrics HTTP/1.1" 200 2051 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-16T14:58:56.083+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.5 - policyadmin [16/Jun/2025:14:58:56 +0000] "GET /metrics HTTP/1.1" 200 2051 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-16T14:59:12.781+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"e442dcc1-36f4-4b66-8f96-77dae1d038d1","timestampMs":1750085952781,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:59:12.793+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"e442dcc1-36f4-4b66-8f96-77dae1d038d1","timestampMs":1750085952781,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-16T14:59:12.793+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.7:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-16T14:56:30.598+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-16T14:56:30.655+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 35 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-16T14:56:30.655+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-16T14:56:32.115+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-16T14:56:32.290+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 165 ms. Found 6 JPA repository interfaces.
policy-api | [2025-06-16T14:56:32.968+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-16T14:56:32.981+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-16T14:56:32.983+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-16T14:56:32.983+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-16T14:56:33.028+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-16T14:56:33.028+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2314 ms
policy-api | [2025-06-16T14:56:33.351+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-16T14:56:33.442+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-16T14:56:33.492+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-16T14:56:33.918+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-16T14:56:33.958+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-16T14:56:34.162+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd
policy-api | [2025-06-16T14:56:34.165+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-16T14:56:34.248+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | Database driver: undefined/unknown
policy-api | Database version: 16.4
policy-api | Autocommit mode: undefined/unknown
policy-api | Isolation level: undefined/unknown
policy-api | Minimum pool size: undefined/unknown
policy-api | Maximum pool size: undefined/unknown
policy-api | [2025-06-16T14:56:36.207+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-16T14:56:36.211+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-16T14:56:36.859+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-16T14:56:37.691+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-16T14:56:38.777+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-16T14:56:38.825+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-16T14:56:39.482+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-16T14:56:39.637+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-16T14:56:39.658+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-16T14:56:39.678+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.736 seconds (process running for 10.326)
policy-api | [2025-06-16T14:56:39.966+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-16T14:56:39.967+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-16T14:56:39.968+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 1 ms
policy-api | [2025-06-16T14:58:25.301+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
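The migrator's first step above is a plain nc retry loop: keep attempting a TCP connect to postgres:5432 until it succeeds, then start the schema upgrade. A Python equivalent as a sketch; host, port, and the two observed "Connection refused" retries come from the log, while the retry interval is an assumption.

```python
# Python equivalent of the migrator's "nc" retry loop above: block until
# postgres accepts TCP connections. Host and port come from the log; the
# retry interval is an illustrative assumption.
import socket
import time

def wait_for_port(host: str, port: int, retry_seconds: float = 2.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=retry_seconds):
                print(f"Connection to {host} {port} port [tcp] succeeded!")
                return
        except OSError:
            print(f"nc: connect to {host} port {port} (tcp) failed: Connection refused")
            time.sleep(retry_seconds)

wait_for_port("postgres", 5432)
```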
policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > 
upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
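Each migration step above follows the same pattern: announce the script with "> upgrade", apply it, and report rc=0 before moving on. The following is an illustrative reconstruction of that driver loop in Python; the real migrator is a shell script, so the function name, the sql/upgrade directory, and the psql invocation here are all assumptions, not the project's actual code.

```python
# Illustrative reconstruction of the upgrade loop implied by the step output
# above: run each versioned .sql script in order and report its return code.
# Directory layout and psql invocation are assumptions for the sketch.
import subprocess
from pathlib import Path

def run_upgrade(script: Path) -> int:
    print(f"> upgrade {script.name}")
    rc = subprocess.run(["psql", "-d", "policyadmin", "-f", str(script)]).returncode
    print(f"rc={rc}")
    return rc

for script in sorted(Path("sql/upgrade").glob("0*.sql")):
    if run_upgrade(script) != 0:
        break  # a non-zero rc would stop the migration; every step above is rc=0
```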
policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:17.935693
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:17.987631
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.042251
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.094314
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.146246
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.200455
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.251777
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.297697
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.350338
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.397009
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.450482
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.500608
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.558888
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.601305
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.651195
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.698474
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.746017
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.787975
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.838584
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.88736
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.931596
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:18.97524
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.024743
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.068952
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.115539
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.160796
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.207063
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.249041
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.292119
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.335531
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.377162
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.428203
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.485438
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.541349
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.603147
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.654953
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.710985
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.761975
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.817013
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.87044
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.929901
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:19.989804
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.043381
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.097099
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.148773
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.200876
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.258218
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.312098
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.371935
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.42492
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.481601
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.527786
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.581323
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.638141
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.68922
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.739889
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.802007
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.851107
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.901833
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:20.95764
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.015955
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.067724
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.121874
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.179927
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.236796
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.2949
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.346622
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.404738
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.457653
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.512696
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.563402
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.61852
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.670054
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.719309
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.769997
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.815864
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.867577
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.917551
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:21.962229
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.009486
policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.059104
policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.110691 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.162202 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.211607 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.269205 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.319725 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.3689 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.416159 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.462468 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.511825 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.5607 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.612251 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.668097 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.717207 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.772118 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251456170800u | 1 | 2025-06-16 14:56:22.822356 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:22.86976 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:22.921831 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:22.970694 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.020831 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.076477 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.12517 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.176285 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.230824 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.282533 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.341579 policy-db-migrator 
| 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.395534
policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.452789
policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606251456170900u | 1 | 2025-06-16 14:56:23.505331
policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.554862
policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.607712
policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.658662
policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.723469
policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.775007
policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.827666
policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.885453
policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.938964
policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251456171000u | 1 | 2025-06-16 14:56:23.990514
policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251456171100u | 1 | 2025-06-16 14:56:24.035321
policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251456171200u | 1 | 2025-06-16 14:56:24.081998
policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251456171200u | 1 | 2025-06-16 14:56:24.143824
policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251456171200u | 1 | 2025-06-16 14:56:24.212143
policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251456171200u | 1 | 2025-06-16 14:56:24.27501
policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251456171300u | 1 | 2025-06-16 14:56:24.327299
policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251456171300u | 1 | 2025-06-16 14:56:24.37745
policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251456171300u | 1 | 2025-06-16 14:56:24.432056
policy-db-migrator | (126 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: OK @ 1300
policy-db-migrator | Initializing clampacm...
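The changelog listing above is the migrator's audit trail: one row per script, with the version range it belongs to, a run tag, a success flag and a timestamp. A minimal sketch of reading that trail back with plain JDBC; the table and column names are taken from the listing itself, while the host, port and password are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChangelogDump {
    public static void main(String[] args) throws Exception {
        // Database name and user appear in the log; host/port/password are hypothetical.
        String url = "jdbc:postgresql://localhost:5432/policyadmin";
        try (Connection c = DriverManager.getConnection(url, "policy_user", "<password>");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "SELECT id, script, operation, from_version, to_version, tag, success, attime "
                     + "FROM policyadmin_schema_changelog ORDER BY id")) {
            while (rs.next()) {
                // success = 1 corresponds to the "rc=0" lines printed per script above.
                System.out.printf("%3d %-60s %s -> %s success=%d%n",
                        rs.getInt("id"), rs.getString("script"),
                        rs.getString("from_version"), rs.getString("to_version"),
                        rs.getInt("success"));
            }
        }
    }
}

The same query against clampacm_schema_changelog, pooling_schema_changelog or operationshistory_schema_changelog (all visible later in this log) would dump the other schemas' histories.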
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | clampacm: upgrade available: 0 -> 1701
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1701
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-participantreplica.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-participant.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-message.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-messagejob.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-participantreplica.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | clampacm: OK: upgrade (1701)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 1701
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.087441
policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.145266
policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.205907
policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.270447
policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.328613
policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.392551
policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.438235
policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.489261
policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.543444
policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.600239
policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.6612
policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.718847
policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251456251400u | 1 | 2025-06-16 14:56:25.778102
policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:25.842188
policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:25.897745
policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:25.956424
policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:26.00915
policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:26.065887
policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:26.114737
policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:26.162397
policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251456251500u | 1 | 2025-06-16 14:56:26.210407
policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251456251600u | 1 | 2025-06-16 14:56:26.261039
policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251456251600u | 1 | 2025-06-16 14:56:26.309846
policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606251456251601u | 1 | 2025-06-16 14:56:26.362213
policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251456251601u | 1 | 2025-06-16 14:56:26.409811
policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251456251700u | 1 | 2025-06-16 14:56:26.465595
policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251456251700u | 1 | 2025-06-16 14:56:26.523023
policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251456251700u | 1 | 2025-06-16 14:56:26.577224
policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.639224
policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.692591
policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.746608
policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.801248
policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.853297
policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.908404
policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:26.963816
policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:27.016708
policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251456251701u | 1 | 2025-06-16 14:56:27.073749
policy-db-migrator | (37 rows)
policy-db-migrator |
policy-db-migrator | clampacm: OK @ 1701
policy-db-migrator | Initializing pooling...
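Each "> upgrade <script>.sql ... rc=0 ... INSERT 0 1" group above is the same two-step pattern: run one script, then append a row to the per-database changelog. The real migrator drives psql from a shell script; the Java sketch below only illustrates that bookkeeping, and the helper name and exact INSERT shape are assumptions (the table and columns come from the changelog listing):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;

public final class MigrationStep {
    // Runs one script's SQL and records the outcome, mirroring "rc=0" + "INSERT 0 1".
    static void apply(Connection c, String script, String sql,
                      String fromVersion, String toVersion, String tag) throws Exception {
        int success = 1;
        try (Statement s = c.createStatement()) {
            s.execute(sql);              // e.g. CREATE TABLE / ALTER TABLE / UPDATE
        } catch (Exception e) {
            success = 0;                 // a failing script would be recorded as well
        }
        try (PreparedStatement p = c.prepareStatement(
                "INSERT INTO clampacm_schema_changelog "
                + "(script, operation, from_version, to_version, tag, success, attime) "
                + "VALUES (?, 'upgrade', ?, ?, ?, ?, now())")) {
            p.setString(1, script);
            p.setString(2, fromVersion);
            p.setString(3, toVersion);
            p.setString(4, tag);
            p.setInt(5, success);
            p.executeUpdate();
        }
    }
}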
policy-db-migrator | 4 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | pooling: upgrade available: 0 -> 1600
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-distributed.locking.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | pooling: OK: upgrade (1600)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251456271600u | 1 | 2025-06-16 14:56:27.765769
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | pooling: OK @ 1600
policy-db-migrator | Initializing operationshistory...
policy-db-migrator | 6 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-operationshistory.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | operationshistory: OK: upgrade (1600)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606251456281600u | 1 | 2025-06-16 14:56:28.446492
policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606251456281600u | 1 | 2025-06-16 14:56:28.51597
policy-db-migrator | (2 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.6:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |
policy-pap | :: Spring Boot :: (v3.4.6)
policy-pap |
policy-pap | [2025-06-16T14:56:42.213+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 57 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-16T14:56:42.214+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-16T14:56:43.581+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-16T14:56:43.669+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 7 JPA repository interfaces.
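The "Found 7 JPA repository interfaces" line refers to Spring Data interfaces that PAP declares against its entities. The sketch below is a hypothetical illustration of the shape Spring Boot scans for at startup, not code from the PAP source; the entity and repository names are invented:

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class PdpGroupEntity {                   // hypothetical entity, for illustration only
    @Id
    Long id;
    String name;
}

interface PdpGroupRepository extends JpaRepository<PdpGroupEntity, Long> {
    PdpGroupEntity findByName(String name);   // query derived from the method name at startup
}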
policy-pap | [2025-06-16T14:56:44.634+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-16T14:56:44.648+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-16T14:56:44.650+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-16T14:56:44.650+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-16T14:56:44.703+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-16T14:56:44.704+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2436 ms policy-pap | [2025-06-16T14:56:45.111+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-16T14:56:45.185+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-16T14:56:45.226+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-16T14:56:45.621+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-16T14:56:45.666+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-16T14:56:45.872+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 policy-pap | [2025-06-16T14:56:45.874+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-16T14:56:45.963+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-16T14:56:47.884+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-16T14:56:47.887+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-16T14:56:49.067+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = f9da0d51-d8fe-4efe-818b-a9b7c652cdb2 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
policy-pap | [2025-06-16T14:56:49.281+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-policy-pap-2
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = policy-pap
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-16T14:56:49.281+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T14:56:49.290+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T14:56:49.290+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T14:56:49.290+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085809290
policy-pap | [2025-06-16T14:56:49.290+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-16T14:56:49.633+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2025-06-16T14:56:49.748+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
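The PdpGroups structure that PapDatabaseInitializer logs is parsed from the mounted groups.json. Reconstructed from the toString() above, that file plausibly looks like the sketch below; field names follow the logged structure, but the real file's exact formatting may differ. (The open-in-view warning that precedes it is informational; setting spring.jpa.open-in-view=false silences it.)

    {
      "groups": [
        {
          "name": "defaultGroup",
          "description": "The default group that registers all supported policy types and pdps.",
          "pdpGroupState": "ACTIVE",
          "pdpSubgroups": [
            {
              "pdpType": "apex",
              "supportedPolicyTypes": [
                { "name": "onap.policies.controlloop.operational.common.Apex", "version": "1.0.0" },
                { "name": "onap.policies.native.Apex", "version": "1.0.0" }
              ],
              "policies": [],
              "currentInstanceCount": 0,
              "desiredInstanceCount": 1
            }
          ]
        }
      ]
    }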
policy-pap | [2025-06-16T14:56:49.820+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-pap | [2025-06-16T14:56:50.019+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
policy-pap | [2025-06-16T14:56:50.719+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-pap | [2025-06-16T14:56:50.848+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-16T14:56:50.867+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2025-06-16T14:56:50.889+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2025-06-16T14:56:50.889+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2025-06-16T14:56:50.890+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2025-06-16T14:56:50.891+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2025-06-16T14:56:50.891+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-pap | [2025-06-16T14:56:50.891+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-pap | [2025-06-16T14:56:50.891+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-pap | [2025-06-16T14:56:50.893+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@bfab0dc
policy-pap | [2025-06-16T14:56:50.902+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-16T14:56:50.903+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = f9da0d51-d8fe-4efe-818b-a9b7c652cdb2
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-16T14:56:50.903+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T14:56:50.911+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T14:56:50.911+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T14:56:50.911+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085810911
policy-pap | [2025-06-16T14:56:50.911+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-16T14:56:50.912+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-pap | [2025-06-16T14:56:50.912+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f5716082-4cdf-41f2-abbb-f37d3075ee74, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6dbdfa38
policy-pap | [2025-06-16T14:56:50.912+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f5716082-4cdf-41f2-abbb-f37d3075ee74, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-16T14:56:50.912+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-policy-pap-4
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = policy-pap
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-16T14:56:50.912+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T14:56:50.918+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T14:56:50.918+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T14:56:50.918+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085810918
policy-pap | [2025-06-16T14:56:50.918+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-16T14:56:50.918+00:00|INFO|ServiceManager|main] Policy PAP starting topics
policy-pap | [2025-06-16T14:56:50.919+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f5716082-4cdf-41f2-abbb-f37d3075ee74, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-16T14:56:50.919+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-16T14:56:50.919+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=045c3a28-4638-4f53-8565-519066005af3, alive=false, publisher=null]]: starting
policy-pap | [2025-06-16T14:56:50.931+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-1
policy-pap | compression.gzip.level = -1
policy-pap | compression.lz4.level = 9
policy-pap | compression.type = none
policy-pap | compression.zstd.level = 3
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | enable.metrics.push = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-16T14:56:50.932+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T14:56:50.943+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2025-06-16T14:56:50.959+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T14:56:50.959+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T14:56:50.959+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085810959
policy-pap | [2025-06-16T14:56:50.959+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=045c3a28-4638-4f53-8565-519066005af3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
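As with the consumers, the producer dump maps onto the standard Java client. A minimal sketch that would yield the same idempotent, acks=-1 configuration follows; it is illustrative only, since the real publishing is done by the framework's InlineKafkaTopicSink, and the payload shown is hypothetical.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Idempotence implies acks=-1 (all) and retries=2147483647, matching the dump above.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; real PDP_UPDATE messages are built by PAP's publisher.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }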
policy-pap | [2025-06-16T14:56:50.960+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b32b18a6-f565-4adc-ba36-dc6d685b61e8, alive=false, publisher=null]]: starting
policy-pap | [2025-06-16T14:56:50.960+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-2
policy-pap | compression.gzip.level = -1
policy-pap | compression.lz4.level = 9
policy-pap | compression.type = none
policy-pap | compression.zstd.level = 3
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | enable.metrics.push = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-16T14:56:50.960+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T14:56:50.961+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-pap | [2025-06-16T14:56:50.964+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T14:56:50.965+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T14:56:50.965+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085810964
policy-pap | [2025-06-16T14:56:50.965+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b32b18a6-f565-4adc-ba36-dc6d685b61e8, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-16T14:56:50.965+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2025-06-16T14:56:50.965+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2025-06-16T14:56:50.970+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2025-06-16T14:56:50.970+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2025-06-16T14:56:50.972+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2025-06-16T14:56:50.972+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2025-06-16T14:56:50.972+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2025-06-16T14:56:50.972+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2025-06-16T14:56:50.973+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2025-06-16T14:56:50.973+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2025-06-16T14:56:50.973+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2025-06-16T14:56:50.974+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.533 seconds (process running for 10.076)
policy-pap | [2025-06-16T14:56:51.388+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-16T14:56:51.390+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: RGdXTQKZTVW282RMUqCsFg
policy-pap | [2025-06-16T14:56:51.390+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: RGdXTQKZTVW282RMUqCsFg
policy-pap | [2025-06-16T14:56:51.390+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: RGdXTQKZTVW282RMUqCsFg
policy-pap | [2025-06-16T14:56:51.416+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-16T14:56:51.417+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-16T14:56:51.433+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-16T14:56:51.433+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Cluster ID: RGdXTQKZTVW282RMUqCsFg
policy-pap | [2025-06-16T14:56:51.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-16T14:56:51.562+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-16T14:56:51.776+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-16T14:56:51.794+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-16T14:56:52.183+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-16T14:56:52.270+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
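The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are transient: the clients request metadata for policy-pdp-pap before the broker has finished creating the topic and electing a leader, and they retry until the metadata settles, which the subsequent group-join entries confirm. If the warnings were a concern, the topic could be created ahead of time with the Kafka admin client, as in this sketch (partition and replication counts assume the single-broker CSIT setup):

    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicSetupSketch {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1 -- a single-broker assumption.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get(); // block until the broker acknowledges creation
            }
        }
    }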
policy-pap | [2025-06-16T14:56:52.915+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-16T14:56:52.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-16T14:56:52.954+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a
policy-pap | [2025-06-16T14:56:52.954+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-16T14:56:53.216+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-16T14:56:53.218+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] (Re-)joining group
policy-pap | [2025-06-16T14:56:53.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Request joining group due to: need to re-join with the given member-id: consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874
policy-pap | [2025-06-16T14:56:53.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] (Re-)joining group
policy-pap | [2025-06-16T14:56:55.977+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a', protocol='range'}
policy-pap | [2025-06-16T14:56:55.986+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-16T14:56:56.033+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-4256b2a0-7699-4466-9093-217f2ec55f1a', protocol='range'}
policy-pap | [2025-06-16T14:56:56.033+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-16T14:56:56.037+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-16T14:56:56.051+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-16T14:56:56.064+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-16T14:56:56.227+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Successfully joined group with generation Generation{generationId=1, memberId='consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874', protocol='range'}
policy-pap | [2025-06-16T14:56:56.227+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Finished assignment for group at generation 1: {consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-16T14:56:56.234+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Successfully synced group in generation Generation{generationId=1, memberId='consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3-a7782bf0-190c-47e0-b567-ada45a009874', protocol='range'}
policy-pap | [2025-06-16T14:56:56.234+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-16T14:56:56.234+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-16T14:56:56.236+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-16T14:56:56.238+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f9da0d51-d8fe-4efe-818b-a9b7c652cdb2-3, groupId=f9da0d51-d8fe-4efe-818b-a9b7c652cdb2] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
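Both consumers go through the classic join, sync, assign, offset-reset sequence above. Application code can observe the "Adding newly assigned partitions" step through a ConsumerRebalanceListener; the sketch below shows the hook with a hypothetical handler, not PAP's actual listener.

    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("Revoked: " + partitions);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Fires at the same point as the "Adding newly assigned partitions" entry.
                    System.out.println("Assigned: " + partitions);
                }
            });
        }
    }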
policy-pap | [2025-06-16T14:57:12.629+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-16T14:57:12.630+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f5e59203-9419-414a-a427-70fdfc484626","timestampMs":1750085832592,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-16T14:57:12.630+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f5e59203-9419-414a-a427-70fdfc484626","timestampMs":1750085832592,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-16T14:57:12.637+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-16T14:57:12.707+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting
policy-pap | [2025-06-16T14:57:12.707+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting listener
policy-pap | [2025-06-16T14:57:12.707+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting timer
policy-pap | [2025-06-16T14:57:12.708+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=44445093-e055-41f6-8cae-1170a53c5916, expireMs=1750085862708]
policy-pap | [2025-06-16T14:57:12.710+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting enqueue
policy-pap | [2025-06-16T14:57:12.710+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=44445093-e055-41f6-8cae-1170a53c5916, expireMs=1750085862708]
policy-pap | [2025-06-16T14:57:12.710+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate started
policy-pap | [2025-06-16T14:57:12.713+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"44445093-e055-41f6-8cae-1170a53c5916","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-16T14:57:12.748+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"44445093-e055-41f6-8cae-1170a53c5916","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-16T14:57:12.748+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-16T14:57:12.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"44445093-e055-41f6-8cae-1170a53c5916","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-16T14:57:12.750+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
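The JSON payloads above are the PAP-to-PDP protocol messages: messageName discriminates the type, and requestId ties each response back to its request. A sketch of pulling those routing fields out of a payload follows; the RoutingInfo class is a hypothetical view for illustration, as the real framework uses its own coder classes.

    import com.google.gson.Gson;

    public class PdpMessageSketch {
        // Hypothetical subset of the fields present in every message above.
        static class RoutingInfo {
            String messageName;
            String requestId;
            String pdpGroup;
            String name;
        }

        public static void main(String[] args) {
            String payload = "{\"messageName\":\"PDP_STATUS\","
                    + "\"requestId\":\"f5e59203-9419-414a-a427-70fdfc484626\","
                    + "\"pdpGroup\":\"defaultGroup\","
                    + "\"name\":\"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7\"}";
            RoutingInfo info = new Gson().fromJson(payload, RoutingInfo.class);
            // Dispatch on messageName; match responses to requests via requestId.
            System.out.println(info.messageName + " / " + info.requestId);
        }
    }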
{"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"44445093-e055-41f6-8cae-1170a53c5916","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.750+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T14:57:12.793+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e96c3548-fd91-4b4f-89de-8949aaced35d","timestampMs":1750085832780,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"} policy-pap | [2025-06-16T14:57:12.793+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-16T14:57:12.794+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e96c3548-fd91-4b4f-89de-8949aaced35d","timestampMs":1750085832780,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup"} policy-pap | [2025-06-16T14:57:12.796+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"44445093-e055-41f6-8cae-1170a53c5916","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"37f110c8-2be7-4db1-ae5a-2dcf042cbd1d","timestampMs":1750085832783,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping enqueue policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping timer policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=44445093-e055-41f6-8cae-1170a53c5916, expireMs=1750085862708] policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping listener policy-pap | [2025-06-16T14:57:12.816+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopped policy-pap | [2025-06-16T14:57:12.820+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"44445093-e055-41f6-8cae-1170a53c5916","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"37f110c8-2be7-4db1-ae5a-2dcf042cbd1d","timestampMs":1750085832783,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.821+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 44445093-e055-41f6-8cae-1170a53c5916 policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate successful policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 start publishing next request policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange starting policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange starting listener policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange starting timer policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=13d54e33-08ac-4c07-883b-b0a1aca6a7f1, expireMs=1750085862823] policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange starting enqueue policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange started policy-pap | [2025-06-16T14:57:12.823+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=13d54e33-08ac-4c07-883b-b0a1aca6a7f1, expireMs=1750085862823] policy-pap | [2025-06-16T14:57:12.824+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.834+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.834+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T14:57:12.843+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"eba19440-06b0-4538-88c0-2e0c36b5d8e5","timestampMs":1750085832836,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.843+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 13d54e33-08ac-4c07-883b-b0a1aca6a7f1 policy-pap | [2025-06-16T14:57:12.855+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","timestampMs":1750085832694,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.855+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T14:57:12.858+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"13d54e33-08ac-4c07-883b-b0a1aca6a7f1","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"eba19440-06b0-4538-88c0-2e0c36b5d8e5","timestampMs":1750085832836,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange stopping policy-pap | [2025-06-16T14:57:12.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange stopping enqueue policy-pap | [2025-06-16T14:57:12.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange stopping timer policy-pap | [2025-06-16T14:57:12.858+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=13d54e33-08ac-4c07-883b-b0a1aca6a7f1, expireMs=1750085862823] policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange stopping listener policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange stopped policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpStateChange successful policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 start publishing next request policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting listener policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting timer policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=807f3e4b-cf4b-4b69-8a31-82c20f855518, expireMs=1750085862859] 
policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate starting enqueue policy-pap | [2025-06-16T14:57:12.859+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate started policy-pap | [2025-06-16T14:57:12.860+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"807f3e4b-cf4b-4b69-8a31-82c20f855518","timestampMs":1750085832849,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.869+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"807f3e4b-cf4b-4b69-8a31-82c20f855518","timestampMs":1750085832849,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.869+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T14:57:12.871+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-25244e0b-c047-452b-89c0-37e22a9824a8","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"807f3e4b-cf4b-4b69-8a31-82c20f855518","timestampMs":1750085832849,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.871+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T14:57:12.877+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"807f3e4b-cf4b-4b69-8a31-82c20f855518","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1ff2e457-3ca4-4833-a933-379eaf279048","timestampMs":1750085832871,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.878+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 807f3e4b-cf4b-4b69-8a31-82c20f855518 policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"807f3e4b-cf4b-4b69-8a31-82c20f855518","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1ff2e457-3ca4-4833-a933-379eaf279048","timestampMs":1750085832871,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping policy-pap | 
[2025-06-16T14:57:12.880+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping enqueue policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping timer policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=807f3e4b-cf4b-4b69-8a31-82c20f855518, expireMs=1750085862859] policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopping listener policy-pap | [2025-06-16T14:57:12.880+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate stopped policy-pap | [2025-06-16T14:57:12.885+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 PdpUpdate successful policy-pap | [2025-06-16T14:57:12.885+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7 has no more requests policy-pap | [2025-06-16T14:57:41.590+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-16T14:57:41.591+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-16T14:57:41.593+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms policy-pap | [2025-06-16T14:57:42.708+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=44445093-e055-41f6-8cae-1170a53c5916, expireMs=1750085862708] policy-pap | [2025-06-16T14:57:42.823+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=13d54e33-08ac-4c07-883b-b0a1aca6a7f1, expireMs=1750085862823] policy-pap | [2025-06-16T14:58:47.394+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-16T14:58:47.402+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-16T14:58:47.774+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup policy-pap | [2025-06-16T14:58:48.371+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup policy-pap | [2025-06-16T14:58:48.371+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup policy-pap | [2025-06-16T14:58:48.851+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2025-06-16T14:58:49.119+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-16T14:58:49.205+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-16T14:58:49.205+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup policy-pap | [2025-06-16T14:58:49.205+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup policy-pap | [2025-06-16T14:58:49.217+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T14:58:49Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T14:58:49Z, user=policyadmin)] 
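The exchange above is PAP's full request cycle on the policy-pdp-pap topic: each outbound PDP_UPDATE / PDP_STATE_CHANGE carries a requestId and registers a 30-second timer (TimerManager logs "waiting 30000ms"), the PDP answers with a PDP_STATUS whose response.responseTo echoes that id, and on SUCCESS the timer is cancelled and the next queued request is published; the "discarded (expired)" lines at 14:57:42 are those same timer entries reaching their expiry in the timer threads 30 s later. The pdpHeartbeatIntervalMs of 120000 in the PDP_UPDATE also matches the heartbeats arriving two minutes apart (14:57:12 and 14:59:12). A minimal bash sketch of the correlation step, illustrative only and not PAP's actual code (jq assumed available):

    #!/bin/bash
    # Hypothetical check mirroring the RequestIdDispatcher lines above:
    # match an inbound PDP_STATUS to the outstanding request via responseTo.
    PENDING_REQUEST="44445093-e055-41f6-8cae-1170a53c5916"  # requestId of the PDP_UPDATE above

    status_msg='{"messageName":"PDP_STATUS","response":{"responseTo":"44445093-e055-41f6-8cae-1170a53c5916","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."}}'

    response_to=$(echo "$status_msg" | jq -r '.response.responseTo')
    if [ "$response_to" = "$PENDING_REQUEST" ]; then
      echo "PdpUpdate successful -> cancel 30s timer, publish next request"
    else
      echo "no listener for request id $response_to -> discard"
    fi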
policy-pap | [2025-06-16T14:58:49.910+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup policy-pap | [2025-06-16T14:58:49.911+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-pap | [2025-06-16T14:58:49.911+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-16T14:58:49.912+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup policy-pap | [2025-06-16T14:58:49.912+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup policy-pap | [2025-06-16T14:58:49.923+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-16T14:58:49Z, user=policyadmin)] policy-pap | [2025-06-16T14:58:50.267+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup policy-pap | [2025-06-16T14:58:50.267+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup policy-pap | [2025-06-16T14:58:50.267+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2025-06-16T14:58:50.268+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-16T14:58:50.268+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup policy-pap | [2025-06-16T14:58:50.268+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup policy-pap | [2025-06-16T14:58:50.277+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-16T14:58:50Z, user=policyadmin)] policy-pap | [2025-06-16T14:58:50.811+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2025-06-16T14:58:50.813+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup policy-pap | [2025-06-16T14:58:51.168+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-16T14:59:12.791+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"e442dcc1-36f4-4b66-8f96-77dae1d038d1","timestampMs":1750085952781,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:59:12.795+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"e442dcc1-36f4-4b66-8f96-77dae1d038d1","timestampMs":1750085952781,"name":"apex-121e4f3a-f38d-4bba-82c1-aa91fe64eef7","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-16T14:59:12.796+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. 
postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-16 14:56:15.451 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-16 14:56:15.453 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-16 14:56:15.458 UTC [50] LOG: database system was shut down at 2025-06-16 14:56:15 UTC postgres | 2025-06-16 14:56:15.462 UTC [47] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
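The db-pg.sh init script is echoed twice in the trace that follows because it runs under bash -xv (verbose plus xtrace): once as read, once as executed with variables expanded. Stripped of the trace markers, it reduces to the loop below (a reconstruction from this log's trace, not a copy from the repo; PGSQL_USER and PGSQL_PASSWORD come from the container environment and both resolve to policy_user here):

    #!/bin/bash -xv
    # Reconstructed from the -xv trace in this log.
    psql -U postgres -d postgres --command \
      "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"

    for db in migration pooling policyadmin policyclamp operationshistory clampacm
    do
      psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
      psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
      psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
    done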
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | CREATE DATABASE postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | CREATE DATABASE postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-16 14:56:16.829 UTC [47] LOG: received fast shutdown request postgres 
| waiting for server to shut down....2025-06-16 14:56:16.832 UTC [47] LOG: aborting any active transactions postgres | 2025-06-16 14:56:16.833 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 postgres | 2025-06-16 14:56:16.836 UTC [48] LOG: shutting down postgres | 2025-06-16 14:56:16.838 UTC [48] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-16 14:56:17.192 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.280 s, sync=0.068 s, total=0.356 s; sync files=1788, longest=0.005 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-16 14:56:17.203 UTC [47] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-16 14:56:17.253 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-16 14:56:17.253 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-16 14:56:17.253 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-16 14:56:17.256 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-16 14:56:17.262 UTC [100] LOG: database system was shut down at 2025-06-16 14:56:17 UTC postgres | 2025-06-16 14:56:17.266 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-16T14:56:17.718Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-16T14:56:17.718Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-16T14:56:17.718Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-16T14:56:17.719Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-16T14:56:17.722Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-16T14:56:17.727Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-16T14:56:17.729Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-16T14:56:17.729Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-16T14:56:17.735Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-16T14:56:17.735Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.17µs prometheus | time=2025-06-16T14:56:17.735Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-16T14:56:17.736Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=256.253µs prometheus | time=2025-06-16T14:56:17.736Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=46.081µs wal_replay_duration=283.253µs wbl_replay_duration=200ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.17µs total_replay_duration=462.195µs prometheus | time=2025-06-16T14:56:17.738Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-16T14:56:17.738Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-16T14:56:17.738Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-16T14:56:17.740Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-16T14:56:17.740Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.96µs remote_storage=2.08µs web_handler=1.19µs query_engine=1.28µs scrape=199.202µs scrape_sd=152.972µs notify=117.552µs notify_sd=27.16µs rules=1.84µs tracing=5.85µs filename=/etc/prometheus/prometheus.yml totalDuration=1.164903ms prometheus | time=2025-06-16T14:56:17.740Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-16T14:56:17.740Z level=INFO source=manager.go:175 msg="Starting rule manager..." 
component="rule manager" simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2025-06-16 14:56:13,895 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2025-06-16 14:56:13,953 INFO org.onap.policy.models.simulators starting simulator | 2025-06-16 14:56:13,953 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2025-06-16 14:56:14,171 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2025-06-16 14:56:14,173 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2025-06-16 14:56:14,375 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-16 14:56:14,389 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-16 14:56:14,391 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-16 14:56:14,399 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-16 
14:56:14,445 INFO Session workerName=node0 simulator | 2025-06-16 14:56:14,459 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-16 14:56:15,117 INFO Using GSON for REST calls simulator | 2025-06-16 14:56:15,179 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-16 14:56:15,184 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2025-06-16 14:56:15,185 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @1764ms simulator | 2025-06-16 14:56:15,186 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4206 ms. simulator | 2025-06-16 14:56:15,190 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2025-06-16 14:56:15,192 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-16 14:56:15,193 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-16 14:56:15,194 INFO JettyJerseyServer 
[JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-16 14:56:15,195 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-16 14:56:15,203 INFO Session workerName=node0 simulator | 2025-06-16 14:56:15,205 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-16 14:56:15,254 INFO Using GSON for REST calls simulator | 2025-06-16 14:56:15,264 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-16 14:56:15,267 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2025-06-16 14:56:15,267 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @1846ms simulator | 2025-06-16 14:56:15,267 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4927 ms. 
simulator | 2025-06-16 14:56:15,269 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2025-06-16 14:56:15,272 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-16 14:56:15,273 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-16 14:56:15,277 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-16 14:56:15,280 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-16 14:56:15,297 INFO Session workerName=node0 simulator | 2025-06-16 14:56:15,299 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-16 14:56:15,359 INFO Using GSON for REST calls simulator | 2025-06-16 14:56:15,378 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-16 14:56:15,385 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2025-06-16 14:56:15,385 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @1964ms simulator | 
2025-06-16 14:56:15,385 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4890 ms. simulator | 2025-06-16 14:56:15,386 INFO org.onap.policy.models.simulators started zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-16 14:56:15,614] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,617] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,617] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,617] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,617] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,618] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 14:56:15,618] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 14:56:15,618] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 14:56:15,618] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-16 14:56:15,619] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-16 14:56:15,620] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,620] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,620] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,620] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,620] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 14:56:15,620] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-16 14:56:15,630] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-16 14:56:15,632] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 14:56:15,632] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 14:56:15,634] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 14:56:15,641] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,641] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,642] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,642] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,642] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,642] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,642] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 14:56:15,643] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1
.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
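ZooKeeper's effective settings only appear piecemeal in the surrounding lines (standalone mode since no quorum is configured, client port 2181, snapshots under /var/lib/zookeeper/data, transaction logs under /var/lib/zookeeper/log, tickTime 3000 ms, purge task disabled). A zookeeper.properties sketch consistent with those logged values — an inference, not the actual /etc/kafka/zookeeper.properties:

    # Hypothetical properties matching the values logged above and below
    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/log
    clientPort=2181
    tickTime=3000
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0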
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,643] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,644] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-16 14:56:15,645] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,645] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,646] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-16 14:56:15,646] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,647] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-16 14:56:15,648] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,649] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,649] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-16 14:56:15,649] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-16 14:56:15,649] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,670] INFO Logging initialized @406ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-16 14:56:15,731] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-16 14:56:15,731] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-16 14:56:15,747] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-16 14:56:15,790] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-16 14:56:15,790] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-16 14:56:15,791] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-16 14:56:15,794] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-16 14:56:15,802] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-16 14:56:15,811] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-16 14:56:15,811] INFO Started @551ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-16 14:56:15,811] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2025-06-16 14:56:15,814] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-16 14:56:15,815] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-16 14:56:15,815] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-16 14:56:15,816] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-16 14:56:15,826] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-16 14:56:15,826] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-16 14:56:15,826] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-16 14:56:15,826] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-16 14:56:15,830] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-16 14:56:15,830] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-16 14:56:15,833] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-16 14:56:15,833] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-16 14:56:15,834] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-16 14:56:15,840] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-16 14:56:15,844] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-16 14:56:15,852] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-16 14:56:15,852] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-16 14:56:16,651] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
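The ZooKeeper settings logged above (tickTime 3000 ms, session timeouts 6000-60000 ms, client port 2181, admin server on 8080, snapshots under /var/lib/zookeeper/data and transaction logs under /var/lib/zookeeper/log) would correspond to a zookeeper.properties roughly like the following. This is a minimal sketch inferred from the logged values, not the file actually shipped in the image:

    # Hypothetical zookeeper.properties reconstructed from the startup log above.
    tickTime=3000
    # minSessionTimeout/maxSessionTimeout are not set explicitly: the logged
    # 6000 ms and 60000 ms are the ZooKeeper defaults of 2x and 20x tickTime.
    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/log
    clientPort=2181
    admin.enableServer=true
    admin.serverPort=8080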
Tearing down containers...
 Container policy-apex-pdp  Stopping
 Container grafana  Stopping
 Container policy-csit  Stopping
 Container policy-csit  Stopped
 Container policy-csit  Removing
 Container policy-csit  Removed
 Container grafana  Stopped
 Container grafana  Removing
 Container grafana  Removed
 Container prometheus  Stopping
 Container prometheus  Stopped
 Container prometheus  Removing
 Container prometheus  Removed
 Container policy-apex-pdp  Stopped
 Container policy-apex-pdp  Removing
 Container policy-apex-pdp  Removed
 Container policy-pap  Stopping
 Container simulator  Stopping
 Container simulator  Stopped
 Container simulator  Removing
 Container simulator  Removed
 Container policy-pap  Stopped
 Container policy-pap  Removing
 Container policy-pap  Removed
 Container policy-api  Stopping
 Container kafka  Stopping
 Container kafka  Stopped
 Container kafka  Removing
 Container kafka  Removed
 Container zookeeper  Stopping
 Container zookeeper  Stopped
 Container zookeeper  Removing
 Container zookeeper  Removed
 Container policy-api  Stopped
 Container policy-api  Removing
 Container policy-api  Removed
 Container policy-db-migrator  Stopping
 Container policy-db-migrator  Stopped
 Container policy-db-migrator  Removing
 Container policy-db-migrator  Removed
 Container postgres  Stopping
 Container postgres  Stopped
 Container postgres  Removing
 Container postgres  Removed
 Network compose_default  Removing
 Network compose_default  Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2025 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins4867104777980648817.sh
---> sysstat.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins1215054250822541146.sh
---> package-listing.sh
++ tr '[:upper:]' '[:lower:]'
++ facter osfamily
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins488875246595286521.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-DwzS from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-DwzS/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
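The package-listing.sh trace above snapshots the installed dpkg packages and diffs them against the snapshot taken at job start, so the archives show exactly what the job installed. Condensed into a standalone script, the Debian branch of that logic is roughly the following (paths are taken from the trace; the real global-jjb script has additional OS-family branches, and WORKSPACE is assumed to be the usual Jenkins-provided variable):

    #!/bin/bash
    # Condensed sketch of the traced package-listing logic (Debian branch only).
    START=/tmp/packages_start.txt
    END=/tmp/packages_end.txt
    DIFF=/tmp/packages_diff.txt
    # '^ii' keeps only fully-installed packages.
    dpkg -l | grep '^ii' > "$END"
    # With both snapshots present, the diff is the set of packages the job
    # added or removed (diff exits non-zero when the files differ, hence || true).
    if [ -f "$START" ] && [ -f "$END" ]; then
        diff "$START" "$END" > "$DIFF" || true
    fi
    mkdir -p "$WORKSPACE/archives/"
    cp -f "$DIFF" "$END" "$START" "$WORKSPACE/archives/"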
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins3991726513625368421.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config6875950004569975476tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins9451305339126482137.sh
---> create-netrc.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins941852600540819203.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-DwzS from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-DwzS/bin to PATH
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins8640418421034885327.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12037190744365799994.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-DwzS from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-DwzS/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins1099749972718816577.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-DwzS from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-DwzS/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/817
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
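logs-deploy.sh hands the actual upload off to lftools. Given the Nexus URL and path printed above, the underlying call is presumably of this shape; treat the exact arguments as an assumption and check the lftools documentation before reusing it:

    # Hypothetical lftools invocation reconstructed from the log output above.
    NEXUS_URL=https://nexus.onap.org
    NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/817
    # BUILD_URL would be the Jenkins-provided URL of this build.
    lftools deploy logs "$NEXUS_URL" "$NEXUS_PATH" "$BUILD_URL"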
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-21635 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   16G  140G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         873       23275           0        8018       30838
Swap:          1023           0        1023

---> ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:fb:ce:bc brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.251/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85950sec preferred_lft 85950sec
    inet6 fe80::f816:3eff:fefb:cebc/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:03:47:f6:6b brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3ff:fe47:f66b/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21635)  06/16/25  _x86_64_  (8 CPU)

14:53:42  LINUX RESTART  (8 CPU)

14:54:01        tps      rtps      wtps   bread/s    bwrtn/s
14:55:01     361.42     66.06    295.37   4306.75  121806.77
14:56:01     361.36     20.15    341.21   2301.22  183670.72
14:57:01     427.30      2.63    424.66    416.06   79870.69
14:58:01     133.13      0.18    132.94     26.00   16455.92
14:59:01      93.65      0.27     93.38     18.13   17629.20
15:00:01      18.41      1.03     17.38     24.53     355.01
15:01:01      71.42      1.80     69.62     93.98    2501.32
Average:     209.53     13.16    196.37   1026.67   60327.09

14:54:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
14:55:01   30149112  31695488    2790100      8.47      68128   1790388   1403448     4.13    855256  1646052   171204
14:56:01   24495832  31652328    8443380     25.63     150236   7070352   1675460     4.93   1003900  6846304  1955564
14:57:01   22269512  29730048   10669700     32.39     165428   7358092   8510592    25.04   3149672  6833420     2308
14:58:01   21692812  29607288   11246400     34.14     187924   7744620   8764208    25.79   3354284  7150564    48652
14:59:01   21482200  29480596   11457012     34.78     206332   7802668   8939668    26.30   3497920  7211020      272
15:00:01   21694684  29651404   11244528     34.14     206584   7765888   7993812    23.52   3343220  7165908      264
15:01:01   23849064  31590140    9090148     27.60     208432   7539900   1583468     4.66   1457336  6966504     9584
Average:   23661888  30486756    9277324     28.16     170438   6724558   5552951    16.34   2380227  6259967   312550

14:54:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
14:55:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:55:01               lo      1.87      1.87      0.21      0.21      0.00      0.00      0.00      0.00
14:55:01             ens3    569.72    395.48   1708.43     86.49      0.00      0.00      0.00      0.00
14:56:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:56:01               lo     14.13     14.13      1.29      1.29      0.00      0.00      0.00      0.00
14:56:01  br-bde2ec529c02      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:56:01             ens3   1544.54    901.88  42484.90     72.06      0.00      0.00      0.00      0.00
14:57:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01      veth8bfd56a      1.73      1.97      0.22      0.19      0.00      0.00      0.00      0.00
14:57:01      veth962773c     91.83     91.62     16.04     18.63      0.00      0.00      0.00      0.00
14:57:01      vethbac6ac3      4.03      5.73      0.71      0.84      0.00      0.00      0.00      0.00
14:58:01          docker0     95.18    124.85      5.48   1145.92      0.00      0.00      0.00      0.00
14:58:01      veth8bfd56a      3.92      5.52      0.67      0.50      0.00      0.00      0.00      0.00
14:58:01      veth962773c      0.20      0.25      0.55      0.02      0.00      0.00      0.00      0.00
14:58:01      vethbac6ac3      0.17      0.38      0.01      0.03      0.00      0.00      0.00      0.00
14:59:01          docker0     28.16     41.14      2.52    201.21      0.00      0.00      0.00      0.00
14:59:01      veth8bfd56a      3.17      4.67      0.52      0.36      0.00      0.00      0.00      0.00
14:59:01      veth962773c    103.02    102.48     13.34     26.40      0.00      0.00      0.00      0.00
14:59:01      vethbac6ac3      0.17      0.38      0.01      0.03      0.00      0.00      0.00      0.00
15:00:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:00:01      veth8bfd56a      3.25      4.78      0.54      0.38      0.00      0.00      0.00      0.00
15:00:01      veth962773c      0.43      0.43      0.59      0.03      0.00      0.00      0.00      0.00
15:00:01      vethbac6ac3      0.17      0.40      0.01      0.03      0.00      0.00      0.00      0.00
15:01:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:01:01               lo     28.43     28.43      2.57      2.57      0.00      0.00      0.00      0.00
15:01:01             ens3   2462.24   1552.21  46847.29    209.48      0.00      0.00      0.00      0.00
Average:          docker0     17.62     23.71      1.14    192.45      0.00      0.00      0.00      0.00
Average:               lo      3.59      3.59      0.33      0.33      0.00      0.00      0.00      0.00
Average:             ens3    348.74    219.96   6682.19     29.74      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21635)  06/16/25  _x86_64_  (8 CPU)

14:53:42  LINUX RESTART  (8 CPU)

14:54:01  CPU   %user  %nice  %system  %iowait  %steal   %idle
14:55:01  all   10.37   0.00     1.34     4.68    0.04   83.57
14:55:01    0    3.33   0.00     0.85     8.76    0.02   87.05
14:55:01    1    7.08   0.00     0.52     0.55    0.02   91.84
14:55:01    2    9.51   0.00     1.03    11.11    0.10   78.24
14:55:01    3   26.12   0.00     3.25     1.72    0.05   68.86
14:55:01    4   10.51   0.00     1.80    10.39    0.03   77.27
14:55:01    5   10.02   0.00     1.30     0.45    0.03   88.19
14:55:01    6    9.96   0.00     0.80     0.59    0.03   88.62
14:55:01    7    6.46   0.00     1.10     3.90    0.03   88.50
14:56:01  all   18.97   0.00     7.91     6.89    0.07   66.16
14:56:01    0   16.22   0.00     7.98     2.35    0.08   73.36
14:56:01    1   16.89   0.00     7.36     2.68    0.07   73.00
14:56:01    2   19.02   0.00     7.14    10.39    0.07   63.38
14:56:01    3   15.80   0.00     7.61     2.60    0.05   73.94
14:56:01    4   16.07   0.00     8.00     1.13    0.05   74.75
14:56:01    5   33.99   0.00     9.68    19.91    0.08   36.33
14:56:01    6   17.96   0.00     8.12     1.31    0.07   72.53
14:56:01    7   15.83   0.00     7.37    14.85    0.05   61.89
14:57:01  all   27.27   0.00     3.63     2.93    0.08   66.09
14:57:01    0   26.93   0.00     3.46     0.27    0.08   69.26
14:57:01    1   27.79   0.00     3.26     2.96    0.07   65.92
14:57:01    2   28.44   0.00     3.37     1.33    0.08   66.77
14:57:01    3   25.62   0.00     3.44     1.80    0.07   69.07
14:57:01    4   30.38   0.00     3.74     0.79    0.07   65.03
14:57:01    5   26.77   0.00     4.69    15.28    0.08   53.17
14:57:01    6   26.41   0.00     3.54     0.30    0.08   69.66
14:57:01    7   25.85   0.00     3.59     0.74    0.07   69.76
14:58:01  all    5.04   0.00     1.23     0.71    0.04   92.98
14:58:01    0    8.44   0.00     1.79     0.45    0.05   89.27
14:58:01    1    4.96   0.00     1.22     0.67    0.03   93.12
14:58:01    2    3.39   0.00     1.07     0.13    0.03   95.37
14:58:01    3    3.65   0.00     1.15     0.84    0.03   94.33
14:58:01    4    5.23   0.00     1.32     0.02    0.05   93.38
14:58:01    5    6.02   0.00     1.27     0.03    0.05   92.63
14:58:01    6    3.75   0.00     0.94     2.63    0.03   92.65
14:58:01    7    4.90   0.00     1.04     0.89    0.03   93.15
14:59:01  all    8.50   0.00     1.61     0.63    0.05   89.21
14:59:01    0    9.07   0.00     1.86     0.12    0.05   88.90
14:59:01    1   11.55   0.00     1.66     0.40    0.05   86.33
14:59:01    2    7.26   0.00     1.68     0.08    0.05   90.93
14:59:01    3    8.16   0.00     2.11     2.40    0.07   87.27
14:59:01    4    6.90   0.00     0.97     0.02    0.03   92.08
14:59:01    5    7.80   0.00     1.76     1.01    0.05   89.38
14:59:01    6   11.12   0.00     1.62     0.25    0.07   86.94
14:59:01    7    6.15   0.00     1.22     0.77    0.03   91.83
15:00:01  all    1.35   0.00     0.46     0.04    0.03   98.12
15:00:01    0    1.54   0.00     0.48     0.03    0.03   97.91
15:00:01    1    1.29   0.00     0.39     0.00    0.03   98.29
15:00:01    2    1.19   0.00     0.43     0.15    0.03   98.20
15:00:01    3    1.39   0.00     0.43     0.03    0.05   98.09
15:00:01    4    1.75   0.00     0.53     0.03    0.03   97.65
15:00:01    5    1.65   0.00     0.52     0.02    0.02   97.80
15:00:01    6    1.05   0.00     0.44     0.03    0.03   98.44
15:00:01    7    0.99   0.00     0.38     0.03    0.03   98.56
15:01:01  all    6.60   0.00     0.75     0.21    0.03   92.42
15:01:01    0    3.22   0.00     0.77     0.05    0.02   95.94
15:01:01    1    3.82   0.00     0.69     0.03    0.02   95.44
15:01:01    2    0.95   0.00     0.48     0.08    0.02   98.46
15:01:01    3    4.94   0.00     0.72     0.07    0.03   94.24
15:01:01    4    3.66   0.00     0.60     0.02    0.02   95.71
15:01:01    5   16.82   0.00     0.79     0.32    0.03   82.04
15:01:01    6    3.22   0.00     0.92     1.02    0.03   94.81
15:01:01    7   16.14   0.00     1.05     0.10    0.03   82.67
Average:  all   11.14   0.00     2.41     2.29    0.05   84.12
Average:    0    9.80   0.00     2.45     1.72    0.05   85.99
Average:    1   10.46   0.00     2.15     1.04    0.04   86.30
Average:    2    9.93   0.00     2.16     3.32    0.06   84.53
Average:    3   12.23   0.00     2.67     1.35    0.05   83.71
Average:    4   10.62   0.00     2.42     1.77    0.04   85.15
Average:    5   14.68   0.00     2.84     5.25    0.05   77.17
Average:    6   10.48   0.00     2.33     0.88    0.05   86.26
Average:    7   10.89   0.00     2.24     3.02    0.04   83.81
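When scanning a sar -P ALL dump like the one above for trouble spots (e.g. the 6.89% all-CPU iowait at 14:56:01, driven largely by 19.91% on CPU 5), a short awk filter over the aggregate rows is enough. A sketch, assuming the 24-hour, no-AM/PM column layout shown above:

    # Flag sampling intervals where the all-CPU %iowait exceeded 5%.
    # Columns assumed: time CPU %user %nice %system %iowait %steal %idle
    sar -P ALL | awk '$2 == "all" && $6+0 > 5 { print $1, "iowait:", $6 "%" }'

Against the data above this prints only the 14:56:01 interval, which lines up with the write-heavy image-pull and container-startup phase visible in the sar -b numbers (183670.72 bwrtn/s in the same window).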