Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141264
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-20908 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-YzGQUcgGpQNy/agent.2110
SSH_AGENT_PID=2112
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_4923728039204946009.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_4923728039204946009.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1 # timeout=30
 > git rev-parse 473f78ecac5fb75e5968b31a5bab95eaba72c803^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/changes/64/141264/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
Commit message: "Add Fix fail handling in ACM runtime in CSIT"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
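The checkout above fetches the Gerrit change by ref rather than by branch. A minimal sketch for reproducing it locally, using the repository URL, change ref, and commit hash exactly as logged (the target directory name is an arbitrary choice):

git init policy-docker && cd policy-docker
# Fetch patchset 1 of change 141264 directly from the mirror, as the job does
git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1
# FETCH_HEAD should resolve to 473f78ecac5fb75e5968b31a5bab95eaba72c803
git checkout -f FETCH_HEAD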
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins3345534069515421422.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-nDki
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-nDki/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-nDki/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.4.26
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
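lf-activate-venv() is an LF releng helper; a rough shell equivalent of the steps it logs here, assuming only what the output shows (the venv path, the lftools install, the PATH update, and "Generating Requirements File" as a pip freeze):

python3 -m venv /tmp/venv-nDki        # "Creating python3 venv at /tmp/venv-nDki"
source /tmp/venv-nDki/bin/activate
pip install lftools                   # "Installing: lftools"
export PATH=/tmp/venv-nDki/bin:$PATH  # "Adding /tmp/venv-nDki/bin to PATH"
pip freeze                            # emits the package list above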
[policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins2289920811199568227.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins11130092825496242988.sh
+ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter omitted: 60.2M downloaded in about one second]
Setting project configuration for: pap
Configuring docker compose...
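The script notices that `docker compose` is unavailable and installs the Compose CLI plugin itself. Done by hand, the standard per-user plugin install looks roughly like this; the exact release URL the job curls is not shown in the log, so the one below is an assumption:

mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose   # ~60 MB binary, matching the logged download size
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version                      # 'compose' now resolves as a docker subcommand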
Starting apex-pdp using postgres + Grafana/Prometheus
kafka Pulling
pap Pulling
policy-db-migrator Pulling
zookeeper Pulling
prometheus Pulling
api Pulling
simulator Pulling
apex-pdp Pulling
postgres Pulling
grafana Pulling
[per-layer pull progress omitted: repeated "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Download complete" / "Extracting" / "Pull complete" status lines]
api Pulled
pap Pulled
simulator Pulled
apex-pdp Pulled
policy-db-migrator Pulled
[excerpt ends mid-pull: kafka, zookeeper, prometheus, postgres and grafana layers were still downloading]
[==================> ] 38.93MB/107.3MB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB eca0188f477e Downloading [==============================> ] 22.99MB/37.17MB 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB c4d302cc468d Extracting [==========> ] 983kB/4.534MB 8b5292c940e1 Downloading [==============================> ] 38.39MB/63.48MB f836d47fdc4d Downloading [===================> ] 42.17MB/107.3MB eca0188f477e Downloading [==================================> ] 26MB/37.17MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB f836d47fdc4d Downloading [====================> ] 44.87MB/107.3MB 8b5292c940e1 Downloading [=================================> ] 42.17MB/63.48MB eca0188f477e Downloading [======================================> ] 28.26MB/37.17MB c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB aecd4cb03450 Pull complete 44986281b8b9 Pull complete 13fa68ca8757 Extracting [==================================================>] 27.77kB/27.77kB 13fa68ca8757 Extracting [==================================================>] 27.77kB/27.77kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB c4d302cc468d Pull complete 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 8b5292c940e1 Downloading [=====================================> ] 47.04MB/63.48MB f836d47fdc4d Downloading [======================> ] 48.66MB/107.3MB eca0188f477e Downloading [===========================================> ] 32.41MB/37.17MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB f836d47fdc4d Downloading [=======================> ] 51.36MB/107.3MB 8b5292c940e1 Downloading [=======================================> ] 50.28MB/63.48MB eca0188f477e Downloading [===============================================> ] 35.04MB/37.17MB 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 13fa68ca8757 Pull complete bf70c5107ab5 Pull complete 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB eca0188f477e Verifying Checksum eca0188f477e Download complete 01e0882c90d9 Pull complete 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB f836d47fdc4d Downloading [=========================> ] 54.07MB/107.3MB 8b5292c940e1 Downloading [==========================================> ] 54.07MB/63.48MB e444bcd4d577 Downloading [==================================================>] 279B/279B e444bcd4d577 Verifying Checksum e444bcd4d577 Download complete eca0188f477e Extracting [> ] 393.2kB/37.17MB eabd8714fec9 Downloading [> ] 539.6kB/375MB f836d47fdc4d Downloading [==========================> ] 57.31MB/107.3MB 8b5292c940e1 Downloading [=============================================> ] 58.39MB/63.48MB 531ee2cf3c0c Extracting [============> ] 2.064MB/8.066MB 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 1ccde423731d Pull complete 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B eca0188f477e Extracting [===> ] 
2.359MB/37.17MB eabd8714fec9 Downloading [> ] 3.243MB/375MB 8b5292c940e1 Downloading [================================================> ] 62.18MB/63.48MB f836d47fdc4d Downloading [============================> ] 60.55MB/107.3MB 531ee2cf3c0c Extracting [==========================> ] 4.227MB/8.066MB 8b5292c940e1 Verifying Checksum 8b5292c940e1 Download complete 55f2b468da67 Extracting [==================================> ] 179.9MB/257.9MB 45fd2fec8a19 Downloading [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Verifying Checksum 45fd2fec8a19 Download complete eca0188f477e Extracting [========> ] 6.291MB/37.17MB eabd8714fec9 Downloading [> ] 5.946MB/375MB 531ee2cf3c0c Extracting [==================================> ] 5.603MB/8.066MB f836d47fdc4d Downloading [=============================> ] 63.8MB/107.3MB 8f10199ed94b Downloading [> ] 97.22kB/8.768MB 55f2b468da67 Extracting [===================================> ] 184.4MB/257.9MB eca0188f477e Extracting [============> ] 9.437MB/37.17MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB eabd8714fec9 Downloading [=> ] 9.19MB/375MB f836d47fdc4d Downloading [===============================> ] 66.5MB/107.3MB 8f10199ed94b Downloading [=============> ] 2.358MB/8.768MB 55f2b468da67 Extracting [====================================> ] 189.4MB/257.9MB eca0188f477e Extracting [=================> ] 13.37MB/37.17MB 8f10199ed94b Downloading [=======================> ] 4.128MB/8.768MB f836d47fdc4d Downloading [================================> ] 68.66MB/107.3MB eabd8714fec9 Downloading [=> ] 11.89MB/375MB 55f2b468da67 Extracting [=====================================> ] 191.6MB/257.9MB 531ee2cf3c0c Pull complete eca0188f477e Extracting [===================> ] 14.16MB/37.17MB ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 8f10199ed94b Verifying Checksum 8f10199ed94b Download complete f836d47fdc4d Downloading [==================================> ] 74.61MB/107.3MB ed54a7dee1d8 Extracting [==========================> ] 622.6kB/1.196MB f963a77d2726 Downloading [=======> ] 3.01kB/21.44kB eabd8714fec9 Downloading [==> ] 20MB/375MB f963a77d2726 Download complete 55f2b468da67 Extracting [=====================================> ] 194.4MB/257.9MB eca0188f477e Extracting [========================> ] 18.09MB/37.17MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB f3a82e9f1761 Downloading [> ] 457.7kB/44.41MB f836d47fdc4d Downloading [====================================> ] 77.32MB/107.3MB eca0188f477e Extracting [==============================> ] 22.81MB/37.17MB eabd8714fec9 Downloading [===> ] 24.33MB/375MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB ed54a7dee1d8 Pull complete 7df673c7455d Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B f3a82e9f1761 Downloading [=====> ] 5.045MB/44.41MB eca0188f477e Extracting [====================================> ] 27.13MB/37.17MB eabd8714fec9 Downloading [====> ] 32.44MB/375MB f836d47fdc4d Downloading [======================================> ] 
82.18MB/107.3MB 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB f3a82e9f1761 Downloading [===========> ] 10.09MB/44.41MB eca0188f477e Extracting [=========================================> ] 30.67MB/37.17MB eabd8714fec9 Downloading [=====> ] 41.09MB/375MB f836d47fdc4d Downloading [========================================> ] 85.97MB/107.3MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eca0188f477e Extracting [============================================> ] 33.03MB/37.17MB f3a82e9f1761 Downloading [=================> ] 15.6MB/44.41MB eabd8714fec9 Downloading [======> ] 50.28MB/375MB f836d47fdc4d Downloading [==========================================> ] 90.29MB/107.3MB prometheus Pulled eabd8714fec9 Downloading [========> ] 61.09MB/375MB f3a82e9f1761 Downloading [=======================> ] 21.1MB/44.41MB eca0188f477e Extracting [=============================================> ] 33.82MB/37.17MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB f836d47fdc4d Downloading [============================================> ] 95.7MB/107.3MB f3a82e9f1761 Downloading [=============================> ] 26.61MB/44.41MB eabd8714fec9 Downloading [=========> ] 71.91MB/375MB 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB eca0188f477e Extracting [================================================> ] 35.78MB/37.17MB f836d47fdc4d Downloading [==============================================> ] 99.48MB/107.3MB eabd8714fec9 Downloading [==========> ] 82.18MB/375MB f3a82e9f1761 Downloading [====================================> ] 32.11MB/44.41MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB f836d47fdc4d Downloading [===============================================> ] 102.7MB/107.3MB 12c5c803443f Pull complete e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB eabd8714fec9 Downloading [============> ] 90.29MB/375MB f3a82e9f1761 Downloading [=========================================> ] 37.16MB/44.41MB eca0188f477e Extracting [================================================> ] 36.18MB/37.17MB eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB f836d47fdc4d Downloading [=================================================> ] 106MB/107.3MB eabd8714fec9 Downloading [=============> ] 103.3MB/375MB f3a82e9f1761 Downloading [=================================================> ] 43.58MB/44.41MB f3a82e9f1761 Verifying Checksum f3a82e9f1761 Download complete eca0188f477e Pull complete f836d47fdc4d Download complete e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B eabd8714fec9 Downloading [==============> ] 107.6MB/375MB 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB e27c75a98748 Pull complete 79161a3f5362 Downloading [================================> ] 3.011kB/4.656kB 9c266ba63f51 Downloading [==================================================>] 1.105kB/1.105kB 79161a3f5362 Download complete 9c266ba63f51 Verifying Checksum 9c266ba63f51 Download complete 10f05dd8b1db Downloading [==================================================>] 98B/98B 
10f05dd8b1db Verifying Checksum 10f05dd8b1db Download complete 2e8a7df9c2ee Downloading [==================================================>] 851B/851B 2e8a7df9c2ee Download complete f836d47fdc4d Extracting [> ] 557.1kB/107.3MB 41dac8b43ba6 Downloading [==================================================>] 171B/171B 41dac8b43ba6 Verifying Checksum 41dac8b43ba6 Download complete eabd8714fec9 Downloading [================> ] 121.1MB/375MB 71a9f6a9ab4d Downloading [> ] 3.009kB/230.6kB e444bcd4d577 Pull complete 71a9f6a9ab4d Downloading [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Verifying Checksum 71a9f6a9ab4d Download complete e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB f836d47fdc4d Extracting [=> ] 2.785MB/107.3MB c955f6e31a04 Downloading [===========================================> ] 3.011kB/3.446kB c955f6e31a04 Downloading [==================================================>] 3.446kB/3.446kB c955f6e31a04 Verifying Checksum c955f6e31a04 Download complete da3ed5db7103 Downloading [> ] 539.6kB/127.4MB eabd8714fec9 Downloading [=================> ] 133MB/375MB e73cb4a42719 Extracting [==> ] 5.571MB/109.1MB 55f2b468da67 Extracting [========================================> ] 208.9MB/257.9MB f836d47fdc4d Extracting [==> ] 5.571MB/107.3MB eabd8714fec9 Downloading [===================> ] 146.5MB/375MB da3ed5db7103 Downloading [=> ] 3.784MB/127.4MB e73cb4a42719 Extracting [====> ] 8.913MB/109.1MB eabd8714fec9 Downloading [=====================> ] 159MB/375MB 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB f836d47fdc4d Extracting [====> ] 8.913MB/107.3MB da3ed5db7103 Downloading [==> ] 7.028MB/127.4MB e73cb4a42719 Extracting [=====> ] 11.14MB/109.1MB eabd8714fec9 Downloading [======================> ] 172.5MB/375MB 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB da3ed5db7103 Downloading [====> ] 10.81MB/127.4MB f836d47fdc4d Extracting [=====> ] 12.26MB/107.3MB e73cb4a42719 Extracting [=======> ] 15.6MB/109.1MB eabd8714fec9 Downloading [========================> ] 185.4MB/375MB 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB da3ed5db7103 Downloading [=====> ] 14.6MB/127.4MB f836d47fdc4d Extracting [=======> ] 15.04MB/107.3MB e73cb4a42719 Extracting [========> ] 19.5MB/109.1MB eabd8714fec9 Downloading [==========================> ] 197.3MB/375MB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB da3ed5db7103 Downloading [======> ] 17.84MB/127.4MB e73cb4a42719 Extracting [==========> ] 23.4MB/109.1MB eabd8714fec9 Downloading [============================> ] 211.4MB/375MB f836d47fdc4d Extracting [========> ] 17.27MB/107.3MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB da3ed5db7103 Downloading [========> ] 21.09MB/127.4MB eabd8714fec9 Downloading [=============================> ] 224.9MB/375MB e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB f836d47fdc4d Extracting [========> ] 18.38MB/107.3MB 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB da3ed5db7103 Downloading [=========> ] 24.33MB/127.4MB eabd8714fec9 Downloading [===============================> ] 235.2MB/375MB e73cb4a42719 Extracting [==============> ] 31.75MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB f836d47fdc4d Extracting 
[==========> ] 21.73MB/107.3MB da3ed5db7103 Downloading [===========> ] 28.65MB/127.4MB eabd8714fec9 Downloading [=================================> ] 248.7MB/375MB e73cb4a42719 Extracting [=================> ] 37.32MB/109.1MB f836d47fdc4d Extracting [============> ] 26.18MB/107.3MB 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB da3ed5db7103 Downloading [============> ] 31.9MB/127.4MB eabd8714fec9 Downloading [==================================> ] 261.1MB/375MB e73cb4a42719 Extracting [===================> ] 42.34MB/109.1MB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB f836d47fdc4d Extracting [=============> ] 29.52MB/107.3MB da3ed5db7103 Downloading [=============> ] 35.14MB/127.4MB eabd8714fec9 Downloading [====================================> ] 277.4MB/375MB e73cb4a42719 Extracting [=====================> ] 46.79MB/109.1MB f836d47fdc4d Extracting [================> ] 35.09MB/107.3MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB eabd8714fec9 Downloading [=======================================> ] 294.1MB/375MB da3ed5db7103 Downloading [===============> ] 39.47MB/127.4MB e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB f836d47fdc4d Extracting [=================> ] 38.44MB/107.3MB eabd8714fec9 Downloading [=========================================> ] 308.7MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB da3ed5db7103 Downloading [================> ] 43.25MB/127.4MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB f836d47fdc4d Extracting [===================> ] 41.78MB/107.3MB eabd8714fec9 Downloading [===========================================> ] 323.3MB/375MB da3ed5db7103 Downloading [==================> ] 47.58MB/127.4MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB f836d47fdc4d Extracting [=====================> ] 46.24MB/107.3MB eabd8714fec9 Downloading [=============================================> ] 338.5MB/375MB da3ed5db7103 Downloading [====================> ] 51.36MB/127.4MB 55f2b468da67 Extracting [==============================================> ] 240.6MB/257.9MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB f836d47fdc4d Extracting [=======================> ] 50.14MB/107.3MB eabd8714fec9 Downloading [==============================================> ] 352MB/375MB da3ed5db7103 Downloading [======================> ] 56.23MB/127.4MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB f836d47fdc4d Extracting [=========================> ] 54.03MB/107.3MB eabd8714fec9 Downloading [================================================> ] 366.6MB/375MB eabd8714fec9 Download complete da3ed5db7103 Downloading [========================> ] 61.64MB/127.4MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB f836d47fdc4d Extracting [==========================> ] 57.38MB/107.3MB eabd8714fec9 Extracting [> ] 557.1kB/375MB 55f2b468da67 Extracting [================================================> ] 251.2MB/257.9MB da3ed5db7103 Downloading [==========================> ] 68.12MB/127.4MB e73cb4a42719 Extracting [==============================> ] 67.4MB/109.1MB f836d47fdc4d Extracting [============================> ] 
61.83MB/107.3MB eabd8714fec9 Extracting [=> ] 12.81MB/375MB 55f2b468da67 Extracting [=================================================> ] 254MB/257.9MB e73cb4a42719 Extracting [================================> ] 71.86MB/109.1MB da3ed5db7103 Downloading [============================> ] 73.53MB/127.4MB f836d47fdc4d Extracting [==============================> ] 64.62MB/107.3MB 55f2b468da67 Extracting [=================================================> ] 257.4MB/257.9MB eabd8714fec9 Extracting [==> ] 20.61MB/375MB e73cb4a42719 Extracting [==================================> ] 75.2MB/109.1MB da3ed5db7103 Downloading [==============================> ] 78.94MB/127.4MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB f836d47fdc4d Extracting [===============================> ] 67.96MB/107.3MB da3ed5db7103 Downloading [================================> ] 83.8MB/127.4MB e73cb4a42719 Extracting [===================================> ] 77.99MB/109.1MB f836d47fdc4d Extracting [=================================> ] 71.86MB/107.3MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB da3ed5db7103 Downloading [===================================> ] 89.21MB/127.4MB f836d47fdc4d Extracting [===================================> ] 75.2MB/107.3MB e73cb4a42719 Extracting [=====================================> ] 81.33MB/109.1MB eabd8714fec9 Extracting [====> ] 32.87MB/375MB da3ed5db7103 Downloading [====================================> ] 94.08MB/127.4MB f836d47fdc4d Extracting [====================================> ] 77.43MB/107.3MB e73cb4a42719 Extracting [======================================> ] 84.12MB/109.1MB eabd8714fec9 Extracting [=====> ] 44.01MB/375MB da3ed5db7103 Downloading [======================================> ] 97.32MB/127.4MB eabd8714fec9 Extracting [======> ] 46.79MB/375MB f836d47fdc4d Extracting [====================================> ] 79.1MB/107.3MB e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB da3ed5db7103 Downloading [==========================================> ] 108.7MB/127.4MB eabd8714fec9 Extracting [=======> ] 57.93MB/375MB e73cb4a42719 Extracting [=========================================> ] 89.69MB/109.1MB f836d47fdc4d Extracting [=====================================> ] 81.33MB/107.3MB da3ed5db7103 Downloading [===============================================> ] 121.1MB/127.4MB eabd8714fec9 Extracting [========> ] 66.29MB/375MB da3ed5db7103 Verifying Checksum da3ed5db7103 Download complete f836d47fdc4d Extracting [=======================================> ] 85.23MB/107.3MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB eabd8714fec9 Extracting [==========> ] 80.77MB/375MB f836d47fdc4d Extracting [==========================================> ] 90.8MB/107.3MB e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB eabd8714fec9 Extracting [============> ] 92.47MB/375MB f836d47fdc4d Extracting [=============================================> ] 98.04MB/107.3MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 55f2b468da67 Pull complete eabd8714fec9 Extracting [=============> ] 99.16MB/375MB f836d47fdc4d Extracting [===============================================> ] 100.8MB/107.3MB e73cb4a42719 Extracting [=============================================> ] 98.6MB/109.1MB eabd8714fec9 Extracting [==============> ] 
105.3MB/375MB eabd8714fec9 Extracting [==============> ] 106.4MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB f836d47fdc4d Extracting [================================================> ] 103.1MB/107.3MB eabd8714fec9 Extracting [==============> ] 110.3MB/375MB e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB f836d47fdc4d Extracting [================================================> ] 104.2MB/107.3MB 82bfc142787e Extracting [==> ] 491.5kB/8.613MB eabd8714fec9 Extracting [===============> ] 113.1MB/375MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB f836d47fdc4d Extracting [=================================================> ] 105.3MB/107.3MB 82bfc142787e Extracting [========================================> ] 6.98MB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB eabd8714fec9 Extracting [===============> ] 116.4MB/375MB 82bfc142787e Pull complete f836d47fdc4d Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB eabd8714fec9 Extracting [===============> ] 119.8MB/375MB 46baca71a4ef Pull complete eabd8714fec9 Extracting [================> ] 124.2MB/375MB 8b5292c940e1 Extracting [> ] 557.1kB/63.48MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 8b5292c940e1 Extracting [> ] 1.114MB/63.48MB b0e0ef7895f4 Extracting [=======> ] 5.505MB/37.01MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [=================> ] 131.5MB/375MB 8b5292c940e1 Extracting [=> ] 1.671MB/63.48MB b0e0ef7895f4 Extracting [==================> ] 13.37MB/37.01MB eabd8714fec9 Extracting [=================> ] 134.8MB/375MB b0e0ef7895f4 Extracting [============================> ] 21.23MB/37.01MB eabd8714fec9 Extracting [==================> ] 137.6MB/375MB 8b5292c940e1 Extracting [=> ] 2.228MB/63.48MB b0e0ef7895f4 Extracting [========================================> ] 29.88MB/37.01MB eabd8714fec9 Extracting [==================> ] 141.5MB/375MB 8b5292c940e1 Extracting [==> ] 2.785MB/63.48MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [===================> ] 144.8MB/375MB 8b5292c940e1 Extracting [===> ] 4.456MB/63.48MB eabd8714fec9 Extracting [===================> ] 149.3MB/375MB 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 8b5292c940e1 Extracting [====> ] 5.571MB/63.48MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB e73cb4a42719 Pull complete eabd8714fec9 Extracting [=====================> ] 158.8MB/375MB 8b5292c940e1 Extracting [=======> ] 8.913MB/63.48MB eabd8714fec9 Extracting [=====================> ] 164.3MB/375MB 8b5292c940e1 Extracting [========> ] 10.58MB/63.48MB eabd8714fec9 Extracting [======================> ] 171.6MB/375MB 8b5292c940e1 
Extracting [==========> ] 12.81MB/63.48MB eabd8714fec9 Extracting [========================> ] 187.2MB/375MB b0e0ef7895f4 Pull complete 8b5292c940e1 Extracting [============> ] 16.15MB/63.48MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [=========================> ] 195MB/375MB 8b5292c940e1 Extracting [=============> ] 16.71MB/63.48MB eabd8714fec9 Extracting [===========================> ] 204.4MB/375MB 8b5292c940e1 Extracting [==============> ] 18.94MB/63.48MB eabd8714fec9 Extracting [============================> ] 215MB/375MB 8b5292c940e1 Extracting [================> ] 20.61MB/63.48MB eabd8714fec9 Extracting [=============================> ] 218.9MB/375MB 8b5292c940e1 Extracting [==================> ] 23.4MB/63.48MB eabd8714fec9 Extracting [=============================> ] 223.4MB/375MB 8b5292c940e1 Extracting [=====================> ] 27.3MB/63.48MB eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [==============================> ] 232.3MB/375MB 8b5292c940e1 Extracting [========================> ] 30.64MB/63.48MB eabd8714fec9 Extracting [===============================> ] 236.7MB/375MB 8b5292c940e1 Extracting [=========================> ] 32.87MB/63.48MB eabd8714fec9 Extracting [================================> ] 241.8MB/375MB 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB eabd8714fec9 Extracting [================================> ] 247.3MB/375MB 8b5292c940e1 Extracting [===============================> ] 40.11MB/63.48MB eabd8714fec9 Extracting [=================================> ] 250.7MB/375MB 8b5292c940e1 Extracting [==================================> ] 44.01MB/63.48MB eabd8714fec9 Extracting [==================================> ] 255.1MB/375MB 8b5292c940e1 Extracting [======================================> ] 48.46MB/63.48MB eabd8714fec9 Extracting [==================================> ] 259.6MB/375MB 8b5292c940e1 Extracting [========================================> ] 51.81MB/63.48MB eabd8714fec9 Extracting [===================================> ] 264MB/375MB 8b5292c940e1 Extracting [=========================================> ] 52.92MB/63.48MB a83b68436f09 Pull complete c0c90eeb8aca Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting 
[==================================================>] 98B/98B 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 40a5eed61bb0 Pull complete 13ff0988aaea Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 8b5292c940e1 Pull complete e040ea11fa10 Pull complete 4b82842ab819 Pull complete 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B eabd8714fec9 Extracting [====================================> ] 273MB/375MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB eabd8714fec9 Extracting [====================================> ] 274.6MB/375MB 454a4350d439 Pull complete 7e568a0dc8fb Pull complete 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB postgres Pulled 09d5a3f70313 Extracting [===> ] 7.799MB/109.2MB eabd8714fec9 Extracting [====================================> ] 277.4MB/375MB 9a8c18aee5ea Pull complete grafana Pulled 09d5a3f70313 Extracting [========> ] 17.83MB/109.2MB eabd8714fec9 Extracting [=====================================> ] 283.5MB/375MB 09d5a3f70313 Extracting [==============> ] 31.75MB/109.2MB eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB 09d5a3f70313 Extracting [=====================> ] 47.91MB/109.2MB eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 09d5a3f70313 Extracting [============================> ] 62.95MB/109.2MB 09d5a3f70313 Extracting [====================================> ] 80.22MB/109.2MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 09d5a3f70313 Extracting [=========================================> ] 91.36MB/109.2MB eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 09d5a3f70313 Extracting [===============================================> ] 104.2MB/109.2MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB 09d5a3f70313 Extracting [=================================================> ] 108.6MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB eabd8714fec9 Extracting [========================================> ] 305.3MB/375MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 
3.623kB/3.623kB eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 356f5c2c843b Pull complete kafka Pulled eabd8714fec9 Extracting [=========================================> ] 308.6MB/375MB eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 344.3MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [================================================> ] 362.1MB/375MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [=========================> ] 4.424MB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB f3a82e9f1761 Extracting [==========> ] 9.634MB/44.41MB f3a82e9f1761 Extracting [===========================> ] 24.77MB/44.41MB f3a82e9f1761 Extracting [==============================================> ] 41.29MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 
Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [=====> ] 12.81MB/127.4MB da3ed5db7103 Extracting [==========> ] 26.18MB/127.4MB da3ed5db7103 Extracting [================> ] 42.34MB/127.4MB da3ed5db7103 Extracting [=======================> ] 59.6MB/127.4MB da3ed5db7103 Extracting [=============================> ] 76.32MB/127.4MB da3ed5db7103 Extracting [====================================> ] 94.14MB/127.4MB da3ed5db7103 Extracting [===========================================> ] 111.4MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 121.4MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container simulator Creating Container prometheus Creating Container postgres Creating Container zookeeper Creating Container prometheus Created Container simulator Created Container grafana Creating Container postgres Created Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container grafana Created Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container zookeeper Starting Container simulator Starting Container prometheus Starting Container postgres Starting Container prometheus Started Container grafana Starting Container grafana Started Container zookeeper Started Container kafka Starting Container kafka Started Container postgres Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container simulator Started Container policy-api Started Container policy-pap Starting Container policy-pap Started Container policy-apex-pdp Starting Container policy-apex-pdp Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting 1 minute for policy-pap to start... Checking if REST port 30003 is open on localhost ... 
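The "Checking if REST port ... is open" steps above poll localhost until the published container port accepts TCP connections. A minimal sketch of such a wait loop in Python follows; the function name, timeout, and retry interval are illustrative assumptions, not the job's actual CSIT helper script:

import socket
import time

def wait_for_port(host: str, port: int, timeout_s: float = 120.0) -> bool:
    # Poll until the TCP port accepts connections or the deadline passes.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(1.0)  # port not open yet; retry
    return False

# Ports probed by this job: 30003 (policy-pap) and 30001 (policy-api).
for p in (30003, 30001):
    print(f"port {p} open: {wait_for_port('localhost', p)}")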
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
Checking if REST port 30001 is open on localhost ...
(docker ps output repeated: same nine containers, all Up About a minute)
Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/models'...
Building robot framework docker image
sha256:657f0e962655e04de5291980dfe213859cf8ca90481f8878cca589881d54b554
top - 14:59:10 up 4 min, 0 users, load average: 2.17, 1.62, 0.71
Tasks: 234 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.3 us, 3.6 sy, 0.0 ni, 78.8 id, 3.1 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         20G         28M        8.1G         28G
Swap:          1.0G          0B        1.0G
(docker ps output repeated: same nine containers, all Up 2 minutes)
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
48b5d3051024   policy-apex-pdp   0.90%   224MiB / 31.41GiB     0.70%   49.5kB / 65.1kB   0B / 0B       51
511b9dc125d4   policy-pap        1.90%   553.6MiB / 31.41GiB   1.72%   130kB / 181kB     0B / 139MB    69
7a71271e2043   policy-api        0.10%   416.1MiB / 31.41GiB   1.29%   1.15MB / 1.02MB   0B / 0B       59
8ef7e961aa00   kafka             3.56%   398.2MiB / 31.41GiB   1.24%   202kB / 182kB     0B / 586kB    83
c7b252008075   grafana           0.22%   105.8MiB / 31.41GiB   0.33%   19.2MB / 237kB    0B / 30.3MB   19
d50be649805a   zookeeper         0.08%   92.95MiB / 31.41GiB   0.29%   53.3kB / 45.7kB   0B / 446kB    62
69c13d481a58   postgres          0.01%   85.78MiB / 31.41GiB   0.27%   1.67MB / 1.73MB   0B / 159MB    26
c2cc3f2da7c1   prometheus        0.00%   20.4MiB / 31.41GiB    0.06%   98.8kB / 5.31kB   229kB / 0B    13
7c113133ead7   simulator         0.06%   120MiB / 31.41GiB     0.37%   1.38kB / 0B       0B / 0B       64
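The IMAGE/NAMES/STATUS and resource tables above are docker ps / docker stats queries against the Docker daemon. Since the build venv already installs the docker Python package, an equivalent status listing is a few lines; this is a sketch, not the job's actual check (note the SDK reports status as e.g. "running" rather than "Up About a minute"):

import docker

client = docker.from_env()
for c in client.containers.list():
    # Fall back to the image ID when the image carries no tag.
    image = c.image.tags[0] if c.image.tags else c.image.short_id
    print(f"{image:<70} {c.name:<18} {c.status}")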
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
(docker ps output repeated: same nine containers, all Up 3 minutes)
Shut down started!
Collecting logs from docker compose containers...
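The ROBOT_VARIABLES block above is just a list of -v name:value overrides handed to Robot Framework inside the policy-csit container. Driven from Python instead of the shell, the equivalent call would look roughly like this sketch; suite names and output directory are taken from the log, and the variable list is abbreviated:

from robot import run

# Abbreviated; the job passes the full -v list shown above.
variables = [
    "DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies",
    "POLICY_PAP_IP:policy-pap:6969",
    "PROMETHEUS_IP:prometheus:9090",
    "TEST_ENV:docker",
]

rc = run(
    "pap-test.robot",
    "pap-slas.robot",
    variable=variables,        # equivalent to repeated -v options on the CLI
    outputdir="/tmp/results",  # where output.xml, log.html, report.html land
)
print(f"RESULT: {rc}")         # robot.run returns the failure count; 0 means all passed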
grafana | logger=settings t=2025-06-13T14:57:03.890757519Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-13T14:57:03Z grafana | logger=settings t=2025-06-13T14:57:03.891092683Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2025-06-13T14:57:03.891104263Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2025-06-13T14:57:03.891108464Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2025-06-13T14:57:03.891112774Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2025-06-13T14:57:03.891116564Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2025-06-13T14:57:03.891120814Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2025-06-13T14:57:03.891124524Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2025-06-13T14:57:03.891128904Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2025-06-13T14:57:03.891133594Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2025-06-13T14:57:03.891137524Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2025-06-13T14:57:03.891141234Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2025-06-13T14:57:03.891145234Z level=info msg=Target target=[all] grafana | logger=settings t=2025-06-13T14:57:03.891152144Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2025-06-13T14:57:03.891156464Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2025-06-13T14:57:03.891160644Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-13T14:57:03.891164544Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-13T14:57:03.891168664Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-13T14:57:03.891173394Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-13T14:57:03.891625Z level=info msg=FeatureToggles alertingRuleVersionHistoryRestore=true reportingUseRawTimeRange=true dashboardSceneForViewers=true logsInfiniteScrolling=true azureMonitorPrometheusExemplars=true tlsMemcached=true newDashboardSharingComponent=true ssoSettingsSAML=true alertingRulePermanentlyDelete=true dashboardSceneSolo=true prometheusUsesCombobox=true alertingNotificationsStepMode=true lokiQueryHints=true lokiLabelNamesQueryApi=true unifiedRequestLog=true awsAsyncQueryCaching=true lokiQuerySplitting=true azureMonitorEnableUserAuth=true grafanaconThemes=true externalCorePlugins=true alertingUIOptimizeReducer=true unifiedStorageSearchPermissionFiltering=true annotationPermissionUpdate=true recordedQueriesMulti=true logsContextDatasourceUi=true preinstallAutoUpdate=true onPremToCloudMigrations=true 
cloudWatchCrossAccountQuerying=true pluginsDetailsRightPanel=true kubernetesClientDashboardsFolders=true angularDeprecationUI=true newPDFRendering=true alertingInsights=true publicDashboardsScene=true newFiltersUI=true influxdbBackendMigration=true correlations=true addFieldFromCalculationStatFunctions=true logsPanelControls=true lokiStructuredMetadata=true logRowsPopoverMenu=true alertingQueryAndExpressionsStepMode=true groupToNestedTableTransformation=true formatString=true panelMonitoring=true dataplaneFrontendFallback=true pinNavItems=true ssoSettingsApi=true alertingRuleRecoverDeleted=true useSessionStorageForRedirection=true cloudWatchRoundUpEndTime=true failWrongDSUID=true prometheusAzureOverrideAudience=true alertingApiServer=true alertingSimplifiedRouting=true promQLScope=true dashgpt=true dashboardScene=true recoveryThreshold=true cloudWatchNewLabelParsing=true transformationsRedesign=true nestedFolders=true alertRuleRestore=true kubernetesPlaylists=true logsExploreTableVisualisation=true grafana | logger=sqlstore t=2025-06-13T14:57:03.891685591Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-13T14:57:03.891704121Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-13T14:57:03.893356761Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-13T14:57:03.893371211Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-13T14:57:03.89403924Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-13T14:57:03.89486435Z level=info msg="Migration successfully executed" id="create migration_log table" duration=824.78µs grafana | logger=migrator t=2025-06-13T14:57:03.903348813Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-13T14:57:03.90387615Z level=info msg="Migration successfully executed" id="create user table" duration=526.987µs grafana | logger=migrator t=2025-06-13T14:57:03.907509784Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-13T14:57:03.908586818Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.076453ms grafana | logger=migrator t=2025-06-13T14:57:03.912116051Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-13T14:57:03.913194054Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.077734ms grafana | logger=migrator t=2025-06-13T14:57:03.918961825Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-13T14:57:03.919622573Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=661.448µs grafana | logger=migrator t=2025-06-13T14:57:03.922828142Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-13T14:57:03.923794713Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=965.091µs grafana | logger=migrator t=2025-06-13T14:57:03.927013153Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-13T14:57:03.930771719Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.757256ms grafana | logger=migrator t=2025-06-13T14:57:03.937152887Z level=info 
msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-13T14:57:03.937951947Z level=info msg="Migration successfully executed" id="create user table v2" duration=795.48µs grafana | logger=migrator t=2025-06-13T14:57:03.940890763Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-13T14:57:03.941975286Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.083963ms grafana | logger=migrator t=2025-06-13T14:57:03.944967963Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-13T14:57:03.946061036Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.092123ms grafana | logger=migrator t=2025-06-13T14:57:03.952289222Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:03.952630996Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=341.544µs grafana | logger=migrator t=2025-06-13T14:57:03.955592432Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-13T14:57:03.956141389Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=548.397µs grafana | logger=migrator t=2025-06-13T14:57:03.962996513Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-13T14:57:03.96439251Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.397757ms grafana | logger=migrator t=2025-06-13T14:57:03.967518208Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-13T14:57:03.967560589Z level=info msg="Migration successfully executed" id="Update user table charset" duration=43.261µs grafana | logger=migrator t=2025-06-13T14:57:03.971118483Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-13T14:57:03.97256666Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.439907ms grafana | logger=migrator t=2025-06-13T14:57:03.977052285Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-13T14:57:03.977276058Z level=info msg="Migration successfully executed" id="Add missing user data" duration=223.243µs grafana | logger=migrator t=2025-06-13T14:57:03.982059206Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-13T14:57:03.983818288Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.758982ms grafana | logger=migrator t=2025-06-13T14:57:03.986925636Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-13T14:57:03.98801916Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.092994ms grafana | logger=migrator t=2025-06-13T14:57:03.99131064Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-13T14:57:03.993033521Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.719661ms grafana | logger=migrator t=2025-06-13T14:57:04.003092514Z level=info msg="Executing migration" id="Update 
is_service_account column to nullable" grafana | logger=migrator t=2025-06-13T14:57:04.015386362Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.293008ms grafana | logger=migrator t=2025-06-13T14:57:04.019775954Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-13T14:57:04.021208431Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.429707ms grafana | logger=migrator t=2025-06-13T14:57:04.024812014Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-13T14:57:04.025026666Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=214.912µs grafana | logger=migrator t=2025-06-13T14:57:04.02785686Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-13T14:57:04.028827411Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=972.541µs grafana | logger=migrator t=2025-06-13T14:57:04.033610828Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-13T14:57:04.034837073Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.226725ms grafana | logger=migrator t=2025-06-13T14:57:04.040626872Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-13T14:57:04.040956825Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=329.883µs grafana | logger=migrator t=2025-06-13T14:57:04.044358766Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-13T14:57:04.044908672Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=549.616µs grafana | logger=migrator t=2025-06-13T14:57:04.048307933Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-13T14:57:04.048987231Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=678.988µs grafana | logger=migrator t=2025-06-13T14:57:04.052631934Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-13T14:57:04.053167471Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=535.346µs grafana | logger=migrator t=2025-06-13T14:57:04.059080351Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-13T14:57:04.059902171Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=821.33µs grafana | logger=migrator t=2025-06-13T14:57:04.063050068Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-13T14:57:04.063765517Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=718.969µs grafana | logger=migrator t=2025-06-13T14:57:04.067143876Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 
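Every migrator entry in this log follows the same two-line pattern: "Executing migration" with an id, then "Migration successfully executed" with a measured duration, and applied ids are remembered in the migration_log table created at the start so reruns skip them. Grafana's migrator is written in Go; the sketch below re-creates only that bookkeeping in Python against sqlite3 for illustration. The migration ids and SQL bodies are simplified stand-ins, including the rename/copy/drop sequence that SQLite needs in place of altering columns, visible above in the "Rename table ... - v1" / "copy ... v1 to v2" / "Drop old table" entries:

    # Illustrative re-creation of the migrator pattern, not Grafana's code.
    import sqlite3
    import time

    # Ordered (id, SQL) pairs; ids mimic the log's style, SQL is hypothetical.
    MIGRATIONS = [
        ("create user table", "CREATE TABLE user (id INTEGER PRIMARY KEY, login TEXT)"),
        ("add unique index user.login", "CREATE UNIQUE INDEX UQE_user_login ON user (login)"),
        # SQLite cannot alter a column in place, hence rename/copy/drop:
        ("rename table user to user_v1", "ALTER TABLE user RENAME TO user_v1"),
        ("create user table v2", "CREATE TABLE user (id INTEGER PRIMARY KEY, login TEXT, email TEXT)"),
        ("copy user v1 to v2", "INSERT INTO user (id, login) SELECT id, login FROM user_v1"),
        ("drop old table user_v1", "DROP TABLE user_v1"),
    ]

    def run_migrations(conn: sqlite3.Connection) -> None:
        conn.execute("CREATE TABLE IF NOT EXISTS migration_log (id TEXT PRIMARY KEY, ts REAL)")
        done = {row[0] for row in conn.execute("SELECT id FROM migration_log")}
        for mig_id, sql in MIGRATIONS:
            if mig_id in done:
                continue  # already applied on a previous start-up
            print(f'msg="Executing migration" id="{mig_id}"')
            start = time.perf_counter()
            conn.execute(sql)
            conn.execute("INSERT INTO migration_log VALUES (?, ?)", (mig_id, time.time()))
            conn.commit()
            duration_us = (time.perf_counter() - start) * 1e6
            print(f'msg="Migration successfully executed" id="{mig_id}" duration={duration_us:.2f}µs')

    run_migrations(sqlite3.connect(":memory:"))

Because applied ids are recorded durably, a container restart replays only new migrations, which is why this fresh SQLite database logs the full history from "create user table" onward.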
grafana | logger=migrator t=2025-06-13T14:57:04.0682635Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.118784ms grafana | logger=migrator t=2025-06-13T14:57:04.074275531Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-13T14:57:04.075420705Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.140704ms grafana | logger=migrator t=2025-06-13T14:57:04.078816965Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-13T14:57:04.079945219Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.128284ms grafana | logger=migrator t=2025-06-13T14:57:04.083620062Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-13T14:57:04.083660853Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=42.141µs grafana | logger=migrator t=2025-06-13T14:57:04.087141884Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-13T14:57:04.088168066Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.029192ms grafana | logger=migrator t=2025-06-13T14:57:04.093920335Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.094916347Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=996.111µs grafana | logger=migrator t=2025-06-13T14:57:04.098335177Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-13T14:57:04.098955535Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=620.008µs grafana | logger=migrator t=2025-06-13T14:57:04.105404071Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-13T14:57:04.106386243Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=982.002µs grafana | logger=migrator t=2025-06-13T14:57:04.121585323Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:57:04.126217448Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.632315ms grafana | logger=migrator t=2025-06-13T14:57:04.12969998Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-13T14:57:04.13058034Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=879.7µs grafana | logger=migrator t=2025-06-13T14:57:04.133629107Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-13T14:57:04.134400996Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=771.449µs grafana | logger=migrator t=2025-06-13T14:57:04.13983965Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-13T14:57:04.14063519Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=795.49µs grafana | 
logger=migrator t=2025-06-13T14:57:04.143865468Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-13T14:57:04.144998382Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.128974ms grafana | logger=migrator t=2025-06-13T14:57:04.148614145Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-13T14:57:04.149792659Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.178024ms grafana | logger=migrator t=2025-06-13T14:57:04.15577589Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:04.156745101Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=968.451µs grafana | logger=migrator t=2025-06-13T14:57:04.16085821Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:57:04.16169819Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=838.83µs grafana | logger=migrator t=2025-06-13T14:57:04.165242672Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-13T14:57:04.165847609Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=606.067µs grafana | logger=migrator t=2025-06-13T14:57:04.171484766Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-13T14:57:04.172131244Z level=info msg="Migration successfully executed" id="create star table" duration=645.968µs grafana | logger=migrator t=2025-06-13T14:57:04.175331422Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-13T14:57:04.176093931Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=762.379µs grafana | logger=migrator t=2025-06-13T14:57:04.179979737Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-13T14:57:04.182249884Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.268897ms grafana | logger=migrator t=2025-06-13T14:57:04.185769296Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-13T14:57:04.187253384Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.484218ms grafana | logger=migrator t=2025-06-13T14:57:04.193286155Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-13T14:57:04.194710302Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.423507ms grafana | logger=migrator t=2025-06-13T14:57:04.19789738Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-13T14:57:04.198911712Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.013282ms grafana | logger=migrator t=2025-06-13T14:57:04.20210937Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-13T14:57:04.20295059Z 
level=info msg="Migration successfully executed" id="create org table v1" duration=840.37µs grafana | logger=migrator t=2025-06-13T14:57:04.206231839Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-13T14:57:04.207050739Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=812.04µs grafana | logger=migrator t=2025-06-13T14:57:04.212491024Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-13T14:57:04.213463935Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=970.341µs grafana | logger=migrator t=2025-06-13T14:57:04.216540182Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.217743626Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.202904ms grafana | logger=migrator t=2025-06-13T14:57:04.220986605Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.222160219Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.176464ms grafana | logger=migrator t=2025-06-13T14:57:04.225350436Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.226171386Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=820.19µs grafana | logger=migrator t=2025-06-13T14:57:04.240460606Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-13T14:57:04.240500146Z level=info msg="Migration successfully executed" id="Update org table charset" duration=38.93µs grafana | logger=migrator t=2025-06-13T14:57:04.243758165Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-13T14:57:04.243794016Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=37.181µs grafana | logger=migrator t=2025-06-13T14:57:04.24754139Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-13T14:57:04.247757983Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=215.393µs grafana | logger=migrator t=2025-06-13T14:57:04.250806119Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-13T14:57:04.251596628Z level=info msg="Migration successfully executed" id="create dashboard table" duration=789.789µs grafana | logger=migrator t=2025-06-13T14:57:04.257166154Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-13T14:57:04.25848323Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.316526ms grafana | logger=migrator t=2025-06-13T14:57:04.262090283Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-13T14:57:04.263397758Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.307005ms grafana | logger=migrator t=2025-06-13T14:57:04.266782419Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator 
t=2025-06-13T14:57:04.267475687Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=692.128µs grafana | logger=migrator t=2025-06-13T14:57:04.27359845Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-13T14:57:04.274974956Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.375906ms grafana | logger=migrator t=2025-06-13T14:57:04.278493388Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-13T14:57:04.279701332Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.207704ms grafana | logger=migrator t=2025-06-13T14:57:04.282718248Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-13T14:57:04.287721398Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.0019ms grafana | logger=migrator t=2025-06-13T14:57:04.293687118Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-13T14:57:04.294471648Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=783.48µs grafana | logger=migrator t=2025-06-13T14:57:04.297615055Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-13T14:57:04.298649168Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.036313ms grafana | logger=migrator t=2025-06-13T14:57:04.301818815Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-13T14:57:04.30308372Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.264575ms grafana | logger=migrator t=2025-06-13T14:57:04.309321384Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:04.309699039Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.365µs grafana | logger=migrator t=2025-06-13T14:57:04.312558843Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-13T14:57:04.313543334Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=983.591µs grafana | logger=migrator t=2025-06-13T14:57:04.316897834Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-13T14:57:04.316915384Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=18.21µs grafana | logger=migrator t=2025-06-13T14:57:04.322705173Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T14:57:04.324629156Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.903213ms grafana | logger=migrator t=2025-06-13T14:57:04.328011876Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T14:57:04.329910329Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.897203ms grafana | logger=migrator 
t=2025-06-13T14:57:04.332833734Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.334747336Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.912292ms grafana | logger=migrator t=2025-06-13T14:57:04.338051196Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.338858625Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=806.39µs grafana | logger=migrator t=2025-06-13T14:57:04.344175458Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.346086141Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.910133ms grafana | logger=migrator t=2025-06-13T14:57:04.358385407Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.359755974Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.369687ms grafana | logger=migrator t=2025-06-13T14:57:04.365890356Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T14:57:04.367153471Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.258955ms grafana | logger=migrator t=2025-06-13T14:57:04.37124839Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-13T14:57:04.371316691Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=68.501µs grafana | logger=migrator t=2025-06-13T14:57:04.374810032Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-13T14:57:04.374850373Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=40.981µs grafana | logger=migrator t=2025-06-13T14:57:04.380229147Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.38217509Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.945103ms grafana | logger=migrator t=2025-06-13T14:57:04.387867207Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.390959974Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.091557ms grafana | logger=migrator t=2025-06-13T14:57:04.39478883Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.396839154Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.050624ms grafana | logger=migrator t=2025-06-13T14:57:04.400061162Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.402175797Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.114125ms grafana | logger=migrator t=2025-06-13T14:57:04.408161579Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.408437342Z level=info msg="Migration successfully executed" id="Update uid column 
values in dashboard" duration=275.173µs grafana | logger=migrator t=2025-06-13T14:57:04.411424648Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:04.412238357Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=813.269µs grafana | logger=migrator t=2025-06-13T14:57:04.415309683Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-13T14:57:04.416525908Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.216025ms grafana | logger=migrator t=2025-06-13T14:57:04.421479597Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-13T14:57:04.421516707Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.41µs grafana | logger=migrator t=2025-06-13T14:57:04.425286142Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-13T14:57:04.426598368Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.311676ms grafana | logger=migrator t=2025-06-13T14:57:04.43012032Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-13T14:57:04.430951689Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=831.619µs grafana | logger=migrator t=2025-06-13T14:57:04.435635725Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:57:04.440914588Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.278053ms grafana | logger=migrator t=2025-06-13T14:57:04.444826794Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-13T14:57:04.445586833Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=762.659µs grafana | logger=migrator t=2025-06-13T14:57:04.449262187Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-13T14:57:04.450261999Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=998.732µs grafana | logger=migrator t=2025-06-13T14:57:04.461159368Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-13T14:57:04.462417503Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.259455ms grafana | logger=migrator t=2025-06-13T14:57:04.476947346Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:04.477811226Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=871.67µs grafana | logger=migrator t=2025-06-13T14:57:04.484417275Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:57:04.485449787Z level=info msg="Migration successfully executed" id="drop 
dashboard_provisioning_tmp_qwerty" duration=1.032202ms grafana | logger=migrator t=2025-06-13T14:57:04.48911545Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-13T14:57:04.492682233Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.565923ms grafana | logger=migrator t=2025-06-13T14:57:04.495907431Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-13T14:57:04.49665827Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=750.599µs grafana | logger=migrator t=2025-06-13T14:57:04.502496589Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-13T14:57:04.502714962Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=218.783µs grafana | logger=migrator t=2025-06-13T14:57:04.505676257Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-13T14:57:04.50586465Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=188.513µs grafana | logger=migrator t=2025-06-13T14:57:04.508704023Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-13T14:57:04.509537233Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=832.85µs grafana | logger=migrator t=2025-06-13T14:57:04.513715113Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.51604311Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.327477ms grafana | logger=migrator t=2025-06-13T14:57:04.518903315Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.521166472Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.262437ms grafana | logger=migrator t=2025-06-13T14:57:04.523840513Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-13T14:57:04.524676583Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=835.74µs grafana | logger=migrator t=2025-06-13T14:57:04.529818654Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-13T14:57:04.532140802Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.321408ms grafana | logger=migrator t=2025-06-13T14:57:04.534906805Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T14:57:04.537218822Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.311767ms grafana | logger=migrator t=2025-06-13T14:57:04.540348079Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-13T14:57:04.540970717Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=622.178µs grafana | logger=migrator t=2025-06-13T14:57:04.546405461Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-13T14:57:04.549726031Z level=info 
msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.3198ms grafana | logger=migrator t=2025-06-13T14:57:04.55300542Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-13T14:57:04.554138633Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.132333ms grafana | logger=migrator t=2025-06-13T14:57:04.557146149Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-13T14:57:04.557689746Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=536.286µs grafana | logger=migrator t=2025-06-13T14:57:04.561015335Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-13T14:57:04.561993226Z level=info msg="Migration successfully executed" id="create data_source table" duration=977.211µs grafana | logger=migrator t=2025-06-13T14:57:04.566762963Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-13T14:57:04.567676644Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=912.631µs grafana | logger=migrator t=2025-06-13T14:57:04.571155356Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-13T14:57:04.571947525Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=791.829µs grafana | logger=migrator t=2025-06-13T14:57:04.575386146Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.576513429Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.127324ms grafana | logger=migrator t=2025-06-13T14:57:04.581377327Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-13T14:57:04.582133516Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=754.739µs grafana | logger=migrator t=2025-06-13T14:57:04.596698489Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-13T14:57:04.604471771Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.813133ms grafana | logger=migrator t=2025-06-13T14:57:04.609173297Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-13T14:57:04.610079558Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=906.141µs grafana | logger=migrator t=2025-06-13T14:57:04.613183755Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-13T14:57:04.614019915Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=835.73µs grafana | logger=migrator t=2025-06-13T14:57:04.616828398Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-13T14:57:04.617665218Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" 
duration=835.97µs grafana | logger=migrator t=2025-06-13T14:57:04.622358354Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-13T14:57:04.622985462Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=626.747µs grafana | logger=migrator t=2025-06-13T14:57:04.626003247Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-13T14:57:04.628480507Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.47665ms grafana | logger=migrator t=2025-06-13T14:57:04.631475442Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-13T14:57:04.634089213Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.609951ms grafana | logger=migrator t=2025-06-13T14:57:04.638630477Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-13T14:57:04.638661948Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=31.72µs grafana | logger=migrator t=2025-06-13T14:57:04.641487361Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-13T14:57:04.641748814Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=260.783µs grafana | logger=migrator t=2025-06-13T14:57:04.64472449Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-13T14:57:04.64728415Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.55951ms grafana | logger=migrator t=2025-06-13T14:57:04.652037936Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-13T14:57:04.65231964Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=281.124µs grafana | logger=migrator t=2025-06-13T14:57:04.655311165Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-13T14:57:04.655537308Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=225.743µs grafana | logger=migrator t=2025-06-13T14:57:04.658787537Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-13T14:57:04.661387278Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.598901ms grafana | logger=migrator t=2025-06-13T14:57:04.668782586Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-13T14:57:04.669039979Z level=info msg="Migration successfully executed" id="Update uid value" duration=256.713µs grafana | logger=migrator t=2025-06-13T14:57:04.673294349Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:04.674141919Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=850.57µs grafana | logger=migrator t=2025-06-13T14:57:04.676935622Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-13T14:57:04.677818733Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=859.37µs grafana | logger=migrator 
t=2025-06-13T14:57:04.680268252Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-13T14:57:04.682497759Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.227707ms grafana | logger=migrator t=2025-06-13T14:57:04.685463914Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-13T14:57:04.688325278Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.861454ms grafana | logger=migrator t=2025-06-13T14:57:04.693985595Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-13T14:57:04.694006125Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=21.26µs grafana | logger=migrator t=2025-06-13T14:57:04.69697613Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-13T14:57:04.697888251Z level=info msg="Migration successfully executed" id="create api_key table" duration=911.841µs grafana | logger=migrator t=2025-06-13T14:57:04.700931447Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-13T14:57:04.701831208Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=899.571µs grafana | logger=migrator t=2025-06-13T14:57:04.713707809Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-13T14:57:04.715186937Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.480308ms grafana | logger=migrator t=2025-06-13T14:57:04.718747539Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-13T14:57:04.720164276Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.416477ms grafana | logger=migrator t=2025-06-13T14:57:04.724862222Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.725844114Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=983.132µs grafana | logger=migrator t=2025-06-13T14:57:04.728684407Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-13T14:57:04.729634549Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=949.762µs grafana | logger=migrator t=2025-06-13T14:57:04.732743366Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-13T14:57:04.733668347Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=924.671µs grafana | logger=migrator t=2025-06-13T14:57:04.739429245Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-13T14:57:04.751014983Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.580097ms grafana | logger=migrator t=2025-06-13T14:57:04.764462942Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-13T14:57:04.765313132Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=852.89µs grafana | logger=migrator 
t=2025-06-13T14:57:04.768830034Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-13T14:57:04.769457952Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=625.758µs grafana | logger=migrator t=2025-06-13T14:57:04.771300014Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-13T14:57:04.771895331Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=595.227µs grafana | logger=migrator t=2025-06-13T14:57:04.774029236Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-13T14:57:04.774632493Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=603.127µs grafana | logger=migrator t=2025-06-13T14:57:04.778182915Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:04.778470379Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=286.934µs grafana | logger=migrator t=2025-06-13T14:57:04.780754186Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-13T14:57:04.781405074Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=652.718µs grafana | logger=migrator t=2025-06-13T14:57:04.785893837Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-13T14:57:04.785955558Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=64.941µs grafana | logger=migrator t=2025-06-13T14:57:04.788805062Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-13T14:57:04.793150563Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.339422ms grafana | logger=migrator t=2025-06-13T14:57:04.796270461Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-13T14:57:04.799078284Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.806564ms grafana | logger=migrator t=2025-06-13T14:57:04.803859951Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-13T14:57:04.804084503Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=223.383µs grafana | logger=migrator t=2025-06-13T14:57:04.806786865Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-13T14:57:04.809248405Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.461ms grafana | logger=migrator t=2025-06-13T14:57:04.812076868Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-13T14:57:04.814831361Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.751153ms grafana | logger=migrator t=2025-06-13T14:57:04.82737328Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-13T14:57:04.828162459Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table 
v4" duration=788.679µs grafana | logger=migrator t=2025-06-13T14:57:04.83154982Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-13T14:57:04.832200537Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=649.787µs grafana | logger=migrator t=2025-06-13T14:57:04.834716297Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-13T14:57:04.835553327Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=836.62µs grafana | logger=migrator t=2025-06-13T14:57:04.840609437Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-13T14:57:04.84170405Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.094273ms grafana | logger=migrator t=2025-06-13T14:57:04.844857818Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-13T14:57:04.845688408Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=830.58µs grafana | logger=migrator t=2025-06-13T14:57:04.848361679Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-13T14:57:04.849186779Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=824.21µs grafana | logger=migrator t=2025-06-13T14:57:04.854228239Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-13T14:57:04.854249759Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=21.59µs grafana | logger=migrator t=2025-06-13T14:57:04.857044223Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-13T14:57:04.857066833Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=23.93µs grafana | logger=migrator t=2025-06-13T14:57:04.859283919Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-13T14:57:04.862102153Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.816674ms grafana | logger=migrator t=2025-06-13T14:57:04.868209325Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-13T14:57:04.87115863Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.950775ms grafana | logger=migrator t=2025-06-13T14:57:04.874468849Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-13T14:57:04.87450341Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=36.481µs grafana | logger=migrator t=2025-06-13T14:57:04.877622447Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-13T14:57:04.878858082Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.234935ms grafana | logger=migrator 
t=2025-06-13T14:57:04.881972309Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-13T14:57:04.882859879Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=887.19µs grafana | logger=migrator t=2025-06-13T14:57:04.888960362Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-13T14:57:04.888986452Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.73µs grafana | logger=migrator t=2025-06-13T14:57:04.891458401Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-13T14:57:04.892855108Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.391367ms grafana | logger=migrator t=2025-06-13T14:57:04.897017138Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-13T14:57:04.898301453Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.284685ms grafana | logger=migrator t=2025-06-13T14:57:04.903496894Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-13T14:57:04.906811074Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.31338ms grafana | logger=migrator t=2025-06-13T14:57:04.909570297Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-13T14:57:04.909592747Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=23.09µs grafana | logger=migrator t=2025-06-13T14:57:04.912546262Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-13T14:57:04.913001228Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=454.915µs grafana | logger=migrator t=2025-06-13T14:57:04.91910679Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-13T14:57:04.932815033Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=13.706943ms grafana | logger=migrator t=2025-06-13T14:57:04.936470306Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-13T14:57:04.937303736Z level=info msg="Migration successfully executed" id="create session table" duration=834.65µs grafana | logger=migrator t=2025-06-13T14:57:04.94938015Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-13T14:57:04.949654743Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=275.633µs grafana | logger=migrator t=2025-06-13T14:57:04.95614879Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-13T14:57:04.956412873Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=265.303µs grafana | logger=migrator t=2025-06-13T14:57:04.960074447Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-13T14:57:04.96115089Z level=info msg="Migration successfully executed" 
id="create playlist table v2" duration=1.077893ms grafana | logger=migrator t=2025-06-13T14:57:04.964366128Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-13T14:57:04.965256868Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=891.04µs grafana | logger=migrator t=2025-06-13T14:57:04.968470027Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-13T14:57:04.968503047Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=34.59µs grafana | logger=migrator t=2025-06-13T14:57:04.974606869Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-13T14:57:04.97465112Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=46.741µs grafana | logger=migrator t=2025-06-13T14:57:04.977900649Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-13T14:57:04.983358674Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.457986ms grafana | logger=migrator t=2025-06-13T14:57:04.986685923Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-13T14:57:04.990035183Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.34719ms grafana | logger=migrator t=2025-06-13T14:57:04.996419409Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-13T14:57:04.996605201Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=184.912µs grafana | logger=migrator t=2025-06-13T14:57:04.999468495Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-13T14:57:04.999550296Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=82.011µs grafana | logger=migrator t=2025-06-13T14:57:05.002628112Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-13T14:57:05.004066579Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.436007ms grafana | logger=migrator t=2025-06-13T14:57:05.007731583Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-13T14:57:05.007766404Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=35.801µs grafana | logger=migrator t=2025-06-13T14:57:05.013437291Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-13T14:57:05.019030177Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.594886ms grafana | logger=migrator t=2025-06-13T14:57:05.02267619Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-13T14:57:05.022954784Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=278.074µs grafana | logger=migrator t=2025-06-13T14:57:05.026218743Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-13T14:57:05.029723744Z level=info msg="Migration successfully executed" 
id="Add column week_start in preferences" duration=3.495361ms grafana | logger=migrator t=2025-06-13T14:57:05.035497383Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-13T14:57:05.038769702Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.271299ms grafana | logger=migrator t=2025-06-13T14:57:05.041740537Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-13T14:57:05.041764047Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=24.31µs grafana | logger=migrator t=2025-06-13T14:57:05.044702532Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-13T14:57:05.045622493Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=919.581µs grafana | logger=migrator t=2025-06-13T14:57:05.048796941Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-13T14:57:05.049779093Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=983.961µs grafana | logger=migrator t=2025-06-13T14:57:05.055694993Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-13T14:57:05.057385473Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.68944ms grafana | logger=migrator t=2025-06-13T14:57:05.065525049Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-13T14:57:05.06641604Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=941.631µs grafana | logger=migrator t=2025-06-13T14:57:05.071909825Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-13T14:57:05.073316212Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.404947ms grafana | logger=migrator t=2025-06-13T14:57:05.076691572Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-13T14:57:05.078103579Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.411707ms grafana | logger=migrator t=2025-06-13T14:57:05.081794843Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-13T14:57:05.082545612Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=750.049µs grafana | logger=migrator t=2025-06-13T14:57:05.085665719Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-13T14:57:05.086516599Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=850.15µs grafana | logger=migrator t=2025-06-13T14:57:05.09250703Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-13T14:57:05.093404421Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=897.221µs grafana | logger=migrator t=2025-06-13T14:57:05.097004413Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator 
t=2025-06-13T14:57:05.110552944Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.549051ms grafana | logger=migrator t=2025-06-13T14:57:05.113779503Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-13T14:57:05.114545552Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=765.199µs grafana | logger=migrator t=2025-06-13T14:57:05.120086298Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-13T14:57:05.120993918Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=907.44µs grafana | logger=migrator t=2025-06-13T14:57:05.124074495Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:05.124647652Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=573.617µs grafana | logger=migrator t=2025-06-13T14:57:05.128909532Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-13T14:57:05.129774763Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=863.751µs grafana | logger=migrator t=2025-06-13T14:57:05.135899195Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-13T14:57:05.136767206Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=867.801µs grafana | logger=migrator t=2025-06-13T14:57:05.139925273Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-13T14:57:05.144820231Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.888228ms grafana | logger=migrator t=2025-06-13T14:57:05.148179881Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-13T14:57:05.153415623Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.235462ms grafana | logger=migrator t=2025-06-13T14:57:05.15901773Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-13T14:57:05.162740594Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.719894ms grafana | logger=migrator t=2025-06-13T14:57:05.16573361Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-13T14:57:05.169413113Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.676763ms grafana | logger=migrator t=2025-06-13T14:57:05.181469977Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-13T14:57:05.183021505Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.555018ms grafana | logger=migrator t=2025-06-13T14:57:05.189668694Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-13T14:57:05.189710764Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=39.62µs grafana | logger=migrator 
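The alert_rule_tag records just above show the migrator's table-rebuild pattern: rename the old table aside, create the replacement plus its unique index, copy the rows across, then drop the renamed original. A minimal Go sketch of the equivalent SQL, with assumed column definitions and an assumed SQLite driver (Grafana's real migrations are defined in its own migrator framework, not written like this):

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // assumed pure-Go driver, illustration only
)

func main() {
	db, err := sql.Open("sqlite", "grafana.db") // hypothetical database file
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One statement per "Executing migration" record in the sequence above.
	steps := []string{
		`ALTER TABLE alert_rule_tag RENAME TO alert_rule_tag_v1`,
		`CREATE TABLE alert_rule_tag (alert_id BIGINT NOT NULL, tag_id BIGINT NOT NULL)`, // columns assumed from the index name
		`CREATE UNIQUE INDEX UQE_alert_rule_tag_alert_id_tag_id ON alert_rule_tag (alert_id, tag_id)`,
		`INSERT INTO alert_rule_tag (alert_id, tag_id) SELECT alert_id, tag_id FROM alert_rule_tag_v1`,
		`DROP TABLE alert_rule_tag_v1`,
	}
	for _, stmt := range steps {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatalf("step %q failed: %v", stmt, err)
		}
	}
}

The same rename/copy/drop shape recurs later in the log (annotation_tag v2 to v3, login_attempt via login_attempt_tmp_qwerty).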
grafana | logger=migrator t=2025-06-13T14:57:05.135899195Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-13T14:57:05.136767206Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=867.801µs
grafana | logger=migrator t=2025-06-13T14:57:05.139925273Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-13T14:57:05.144820231Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.888218ms
grafana | logger=migrator t=2025-06-13T14:57:05.148179881Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-13T14:57:05.153415623Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.235462ms
grafana | logger=migrator t=2025-06-13T14:57:05.15901772Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-13T14:57:05.162740574Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.719864ms
grafana | logger=migrator t=2025-06-13T14:57:05.16573371Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-13T14:57:05.169413153Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.676763ms
grafana | logger=migrator t=2025-06-13T14:57:05.181469977Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-13T14:57:05.183021195Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.555018ms
grafana | logger=migrator t=2025-06-13T14:57:05.189668694Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-13T14:57:05.189710764Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=39.62µs
grafana | logger=migrator t=2025-06-13T14:57:05.1934985Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-13T14:57:05.19353948Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=45.881µs
grafana | logger=migrator t=2025-06-13T14:57:05.196731718Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-13T14:57:05.198271996Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.539668ms
grafana | logger=migrator t=2025-06-13T14:57:05.20445858Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:57:05.205401831Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=942.791µs
grafana | logger=migrator t=2025-06-13T14:57:05.208903843Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-13T14:57:05.210191678Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.285835ms
grafana | logger=migrator t=2025-06-13T14:57:05.214336097Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-13T14:57:05.215686183Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.348636ms
grafana | logger=migrator t=2025-06-13T14:57:05.220644022Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:57:05.221575533Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=931.291µs
grafana | logger=migrator t=2025-06-13T14:57:05.225218766Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-13T14:57:05.229116963Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.897496ms
grafana | logger=migrator t=2025-06-13T14:57:05.233626566Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-13T14:57:05.237513872Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.875936ms
grafana | logger=migrator t=2025-06-13T14:57:05.241156746Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-13T14:57:05.241441719Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=280.633µs
grafana | logger=migrator t=2025-06-13T14:57:05.24494256Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:57:05.245946812Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.003832ms
grafana | logger=migrator t=2025-06-13T14:57:05.249669627Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-13T14:57:05.251060253Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.390636ms
grafana | logger=migrator t=2025-06-13T14:57:05.256952253Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-13T14:57:05.263173477Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.220394ms
grafana | logger=migrator t=2025-06-13T14:57:05.26676754Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-13T14:57:05.26678284Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=15.92µs
grafana | logger=migrator t=2025-06-13T14:57:05.270026708Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-13T14:57:05.270717877Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=690.879µs
grafana | logger=migrator t=2025-06-13T14:57:05.275040808Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-13T14:57:05.276526636Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.485847ms
grafana | logger=migrator t=2025-06-13T14:57:05.280063348Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-13T14:57:05.280339771Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=274.933µs
grafana | logger=migrator t=2025-06-13T14:57:05.284655022Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-13T14:57:05.285629534Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=973.852µs
grafana | logger=migrator t=2025-06-13T14:57:05.299872763Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-13T14:57:05.301467652Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.593759ms
grafana | logger=migrator t=2025-06-13T14:57:05.30550347Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-13T14:57:05.306992348Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.488758ms
grafana | logger=migrator t=2025-06-13T14:57:05.310833453Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-13T14:57:05.311807515Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=972.952µs
grafana | logger=migrator t=2025-06-13T14:57:05.316249897Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-13T14:57:05.317557833Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.305956ms
grafana | logger=migrator t=2025-06-13T14:57:05.321132755Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-13T14:57:05.322699744Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.566269ms
grafana | logger=migrator t=2025-06-13T14:57:05.326151275Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-13T14:57:05.326175775Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=25.11µs
grafana | logger=migrator t=2025-06-13T14:57:05.330477986Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.334632646Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.15469ms
grafana | logger=migrator t=2025-06-13T14:57:05.338149488Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-13T14:57:05.339051218Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=901.37µs
grafana | logger=migrator t=2025-06-13T14:57:05.342170755Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.346367275Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.19612ms
grafana | logger=migrator t=2025-06-13T14:57:05.350892119Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-13T14:57:05.351675898Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=783.459µs
grafana | logger=migrator t=2025-06-13T14:57:05.355086469Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-13T14:57:05.3560471Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=959.901µs
grafana | logger=migrator t=2025-06-13T14:57:05.359640843Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-13T14:57:05.360552734Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=908.461µs
grafana | logger=migrator t=2025-06-13T14:57:05.36530131Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-13T14:57:05.37708128Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.77897ms
grafana | logger=migrator t=2025-06-13T14:57:05.380491851Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-13T14:57:05.381084448Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=591.717µs
grafana | logger=migrator t=2025-06-13T14:57:05.384477508Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-13T14:57:05.385217087Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=739.239µs
grafana | logger=migrator t=2025-06-13T14:57:05.389956943Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-13T14:57:05.390500809Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=542.316µs
grafana | logger=migrator t=2025-06-13T14:57:05.394133623Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-13T14:57:05.395063244Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=928.381µs
grafana | logger=migrator t=2025-06-13T14:57:05.398744657Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-13T14:57:05.399138312Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=392.635µs
grafana | logger=migrator t=2025-06-13T14:57:05.404209812Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.408524833Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.313961ms
grafana | logger=migrator t=2025-06-13T14:57:05.418160158Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.42258634Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.426092ms
grafana | logger=migrator t=2025-06-13T14:57:05.42670978Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.428163567Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.453787ms
grafana | logger=migrator t=2025-06-13T14:57:05.433075755Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.434017346Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=941.281µs
grafana | logger=migrator t=2025-06-13T14:57:05.437572819Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-13T14:57:05.437844622Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=270.523µs
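The seconds-to-milliseconds record above is a data migration rather than a schema change. A minimal sketch of what such a conversion could look like, reusing the assumed driver from the earlier sketch; the WHERE guard that decides a value is still second-resolution is an assumption for illustration, not Grafana's exact predicate:

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // assumed driver, as in the sketch above
)

// convertAnnotationEpochs multiplies second-resolution epoch values by
// 1000. Values already in milliseconds exceed the cutoff and are skipped.
func convertAnnotationEpochs(db *sql.DB) error {
	_, err := db.Exec(`UPDATE annotation SET epoch = epoch * 1000 WHERE epoch < 9999999999`)
	return err
}

func main() {
	db, err := sql.Open("sqlite", "grafana.db") // hypothetical database file
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := convertAnnotationEpochs(db); err != nil {
		log.Fatal(err)
	}
}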
grafana | logger=migrator t=2025-06-13T14:57:05.441461725Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-13T14:57:05.448471338Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.008313ms
grafana | logger=migrator t=2025-06-13T14:57:05.452317783Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-13T14:57:05.453259145Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=940.662µs
grafana | logger=migrator t=2025-06-13T14:57:05.458442926Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-13T14:57:05.458832471Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=388.455µs
grafana | logger=migrator t=2025-06-13T14:57:05.462703577Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-13T14:57:05.463368755Z level=info msg="Migration successfully executed" id="Move region to single row" duration=664.288µs
grafana | logger=migrator t=2025-06-13T14:57:05.46720525Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.468058941Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=853.691µs
grafana | logger=migrator t=2025-06-13T14:57:05.472510834Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.473368794Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=857.5µs
grafana | logger=migrator t=2025-06-13T14:57:05.477604904Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.478516215Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=911.311µs
grafana | logger=migrator t=2025-06-13T14:57:05.485109063Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.4865533Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.443757ms
grafana | logger=migrator t=2025-06-13T14:57:05.491032074Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.49244176Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.409707ms
grafana | logger=migrator t=2025-06-13T14:57:05.495920551Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-13T14:57:05.496861973Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=941.572µs
grafana | logger=migrator t=2025-06-13T14:57:05.501237745Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-13T14:57:05.501256555Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=19.16µs
grafana | logger=migrator t=2025-06-13T14:57:05.504666955Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:57:05.504693766Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=32.211µs
grafana | logger=migrator t=2025-06-13T14:57:05.507732952Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:57:05.507757512Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=25.8µs
grafana | logger=migrator t=2025-06-13T14:57:05.512173845Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-13T14:57:05.51347757Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.302415ms
grafana | logger=migrator t=2025-06-13T14:57:05.519083917Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:57:05.520413142Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.328116ms
grafana | logger=migrator t=2025-06-13T14:57:05.535909397Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-13T14:57:05.537488595Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.578959ms
grafana | logger=migrator t=2025-06-13T14:57:05.541619924Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-13T14:57:05.543099922Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.479718ms
grafana | logger=migrator t=2025-06-13T14:57:05.54796001Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-13T14:57:05.548176652Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=216.362µs
grafana | logger=migrator t=2025-06-13T14:57:05.551442781Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:57:05.551820945Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=377.364µs
grafana | logger=migrator t=2025-06-13T14:57:05.556072926Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:57:05.556090586Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=18.48µs
grafana | logger=migrator t=2025-06-13T14:57:05.559469336Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-13T14:57:05.563943879Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.475903ms
grafana | logger=migrator t=2025-06-13T14:57:05.568358752Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-13T14:57:05.569161882Z level=info msg="Migration successfully executed" id="create team table" duration=802.76µs
grafana | logger=migrator t=2025-06-13T14:57:05.572336499Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-13T14:57:05.573540673Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.202934ms
grafana | logger=migrator t=2025-06-13T14:57:05.577314768Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-13T14:57:05.578751845Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.436377ms
grafana | logger=migrator t=2025-06-13T14:57:05.582966795Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-13T14:57:05.587413838Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.446153ms
grafana | logger=migrator t=2025-06-13T14:57:05.590865189Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-13T14:57:05.591140483Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=274.714µs
grafana | logger=migrator t=2025-06-13T14:57:05.59433951Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:57:05.595302892Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=963.072µs
grafana | logger=migrator t=2025-06-13T14:57:05.599734055Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-13T14:57:05.607007861Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=7.268716ms
grafana | logger=migrator t=2025-06-13T14:57:05.610468982Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-13T14:57:05.613732111Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=3.262579ms
grafana | logger=migrator t=2025-06-13T14:57:05.616782377Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-13T14:57:05.617571896Z level=info msg="Migration successfully executed" id="create team member table" duration=788.469µs
grafana | logger=migrator t=2025-06-13T14:57:05.621873128Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-13T14:57:05.623015851Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.141653ms
grafana | logger=migrator t=2025-06-13T14:57:05.62629729Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-13T14:57:05.627296232Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=997.912µs
grafana | logger=migrator t=2025-06-13T14:57:05.630438969Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-13T14:57:05.63135663Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=917.451µs
grafana | logger=migrator t=2025-06-13T14:57:05.635788423Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-13T14:57:05.641016955Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.227102ms
grafana | logger=migrator t=2025-06-13T14:57:05.652970547Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-13T14:57:05.662165736Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=9.189489ms
grafana | logger=migrator t=2025-06-13T14:57:05.670453974Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-13T14:57:05.675914129Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.459695ms
grafana | logger=migrator t=2025-06-13T14:57:05.681421475Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-13T14:57:05.682341476Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=919.641µs
grafana | logger=migrator t=2025-06-13T14:57:05.687468087Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-13T14:57:05.688264916Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=796.849µs
grafana | logger=migrator t=2025-06-13T14:57:05.692331984Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-13T14:57:05.693203395Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=871.561µs
grafana | logger=migrator t=2025-06-13T14:57:05.696382191Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-13T14:57:05.697500825Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.117544ms
grafana | logger=migrator t=2025-06-13T14:57:05.702785478Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-13T14:57:05.704180464Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.394596ms
grafana | logger=migrator t=2025-06-13T14:57:05.707781677Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-13T14:57:05.708706788Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=916.341µs
grafana | logger=migrator t=2025-06-13T14:57:05.713477954Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-13T14:57:05.716908085Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=3.430161ms
grafana | logger=migrator t=2025-06-13T14:57:05.722723784Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-13T14:57:05.723636775Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=913.151µs
grafana | logger=migrator t=2025-06-13T14:57:05.728628744Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-13T14:57:05.730314264Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.68899ms
grafana | logger=migrator t=2025-06-13T14:57:05.734817298Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-13T14:57:05.735573837Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=756.329µs
grafana | logger=migrator t=2025-06-13T14:57:05.740482895Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-13T14:57:05.740942771Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=465.636µs
grafana | logger=migrator t=2025-06-13T14:57:05.744001937Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-13T14:57:05.744761096Z level=info msg="Migration successfully executed" id="create tag table" duration=753.139µs
grafana | logger=migrator t=2025-06-13T14:57:05.749930548Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-13T14:57:05.75182741Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.895883ms
grafana | logger=migrator t=2025-06-13T14:57:05.758016263Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-13T14:57:05.758760422Z level=info msg="Migration successfully executed" id="create login attempt table" duration=744.269µs
grafana | logger=migrator t=2025-06-13T14:57:05.76278043Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-13T14:57:05.763766242Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=985.602µs
grafana | logger=migrator t=2025-06-13T14:57:05.775527912Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-13T14:57:05.776822427Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.296016ms
grafana | logger=migrator t=2025-06-13T14:57:05.780289718Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:57:05.796323668Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=16.03429ms
grafana | logger=migrator t=2025-06-13T14:57:05.801710322Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2025-06-13T14:57:05.802233979Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=523.617µs
grafana | logger=migrator t=2025-06-13T14:57:05.806925934Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2025-06-13T14:57:05.808338321Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.411787ms
grafana | logger=migrator t=2025-06-13T14:57:05.81241572Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2025-06-13T14:57:05.812871385Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=455.085µs
grafana | logger=migrator t=2025-06-13T14:57:05.817366418Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T14:57:05.817926385Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=559.767µs
grafana | logger=migrator t=2025-06-13T14:57:05.822717622Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2025-06-13T14:57:05.823712654Z level=info msg="Migration successfully executed" id="create user auth table" duration=993.792µs
grafana | logger=migrator t=2025-06-13T14:57:05.827915324Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2025-06-13T14:57:05.829331181Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.413986ms
grafana | logger=migrator t=2025-06-13T14:57:05.834024386Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2025-06-13T14:57:05.834044877Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=19.491µs
grafana | logger=migrator t=2025-06-13T14:57:05.840407782Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.845589394Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.181002ms
grafana | logger=migrator t=2025-06-13T14:57:05.851345172Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.856931218Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.588286ms
grafana | logger=migrator t=2025-06-13T14:57:05.861130258Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.866399801Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.268913ms
grafana | logger=migrator t=2025-06-13T14:57:05.872440073Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.877609064Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.169012ms
grafana | logger=migrator t=2025-06-13T14:57:05.889728878Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.891254216Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.517688ms
grafana | logger=migrator t=2025-06-13T14:57:05.895488536Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.904552434Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.053028ms
grafana | logger=migrator t=2025-06-13T14:57:05.91006385Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
grafana | logger=migrator t=2025-06-13T14:57:05.914137268Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.073828ms
grafana | logger=migrator t=2025-06-13T14:57:05.917237835Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2025-06-13T14:57:05.917776281Z level=info msg="Migration successfully executed" id="create server_lock table" duration=539.086µs
grafana | logger=migrator t=2025-06-13T14:57:05.920606425Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2025-06-13T14:57:05.921343034Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=733.829µs
grafana | logger=migrator t=2025-06-13T14:57:05.927438686Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2025-06-13T14:57:05.928865983Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.426997ms
grafana | logger=migrator t=2025-06-13T14:57:05.932181492Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2025-06-13T14:57:05.933210775Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.028283ms
grafana | logger=migrator t=2025-06-13T14:57:05.936324442Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2025-06-13T14:57:05.937319473Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=994.661µs
grafana | logger=migrator t=2025-06-13T14:57:05.943418376Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2025-06-13T14:57:05.944458698Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.039842ms
grafana | logger=migrator t=2025-06-13T14:57:05.948338654Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2025-06-13T14:57:05.954099803Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.760679ms
level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-13T14:57:05.95893297Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=967.962µs grafana | logger=migrator t=2025-06-13T14:57:05.965123614Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-13T14:57:05.970978093Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.853749ms grafana | logger=migrator t=2025-06-13T14:57:05.974659587Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-13T14:57:05.975584858Z level=info msg="Migration successfully executed" id="create cache_data table" duration=925.461µs grafana | logger=migrator t=2025-06-13T14:57:05.978775616Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-13T14:57:05.979771338Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=995.232µs grafana | logger=migrator t=2025-06-13T14:57:05.986191964Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-13T14:57:05.987023444Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=831.2µs grafana | logger=migrator t=2025-06-13T14:57:05.991523547Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-13T14:57:05.993035065Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.511798ms grafana | logger=migrator t=2025-06-13T14:57:06.005972739Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T14:57:06.005991359Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.37µs grafana | logger=migrator t=2025-06-13T14:57:06.01196965Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-13T14:57:06.012101722Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=132.232µs grafana | logger=migrator t=2025-06-13T14:57:06.02041039Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-13T14:57:06.021900438Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.490718ms grafana | logger=migrator t=2025-06-13T14:57:06.025870295Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:57:06.026829677Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=959.192µs grafana | logger=migrator t=2025-06-13T14:57:06.030391939Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:57:06.031400121Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.007532ms grafana | logger=migrator t=2025-06-13T14:57:06.036716184Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | 
logger=migrator t=2025-06-13T14:57:06.036733174Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=17.94µs grafana | logger=migrator t=2025-06-13T14:57:06.038813539Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:57:06.039812231Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=998.292µs grafana | logger=migrator t=2025-06-13T14:57:06.043433584Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:57:06.044373645Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=939.921µs grafana | logger=migrator t=2025-06-13T14:57:06.050259556Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:57:06.051287718Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.027742ms grafana | logger=migrator t=2025-06-13T14:57:06.055173344Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:57:06.056219527Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.045812ms grafana | logger=migrator t=2025-06-13T14:57:06.062770304Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-13T14:57:06.06911542Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.344776ms grafana | logger=migrator t=2025-06-13T14:57:06.073115898Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-13T14:57:06.074074989Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=959.031µs grafana | logger=migrator t=2025-06-13T14:57:06.077974785Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:57:06.078066776Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=92.101µs grafana | logger=migrator t=2025-06-13T14:57:06.084514533Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:57:06.085494125Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=979.292µs grafana | logger=migrator t=2025-06-13T14:57:06.089503722Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-13T14:57:06.090557995Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.053733ms grafana | logger=migrator t=2025-06-13T14:57:06.094436371Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-13T14:57:06.095467244Z level=info msg="Migration successfully executed" id="add index in 
alert_definition_version table on alert_definition_uid and version columns" duration=1.030383ms grafana | logger=migrator t=2025-06-13T14:57:06.102102732Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T14:57:06.102125463Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=23.521µs grafana | logger=migrator t=2025-06-13T14:57:06.105644814Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:57:06.106614246Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=969.172µs grafana | logger=migrator t=2025-06-13T14:57:06.123293344Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-13T14:57:06.124927384Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.63426ms grafana | logger=migrator t=2025-06-13T14:57:06.129365997Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-13T14:57:06.130905365Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.538758ms grafana | logger=migrator t=2025-06-13T14:57:06.137004618Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-13T14:57:06.13800669Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.000102ms grafana | logger=migrator t=2025-06-13T14:57:06.141075616Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-13T14:57:06.147403761Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.327535ms grafana | logger=migrator t=2025-06-13T14:57:06.150469708Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T14:57:06.151439929Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=970.201µs grafana | logger=migrator t=2025-06-13T14:57:06.157616173Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T14:57:06.158555884Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=935.831µs grafana | logger=migrator t=2025-06-13T14:57:06.162283799Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-13T14:57:06.189500842Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.215693ms grafana | logger=migrator t=2025-06-13T14:57:06.196284153Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-13T14:57:06.226183329Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.803755ms grafana | 
grafana | logger=migrator t=2025-06-13T14:57:06.241635413Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:57:06.243489875Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.854862ms
grafana | logger=migrator t=2025-06-13T14:57:06.248799298Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:57:06.250347497Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.548448ms
grafana | logger=migrator t=2025-06-13T14:57:06.259407374Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2025-06-13T14:57:06.268157748Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.748024ms
grafana | logger=migrator t=2025-06-13T14:57:06.453961949Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2025-06-13T14:57:06.462003365Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=8.044316ms
grafana | logger=migrator t=2025-06-13T14:57:06.525115436Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2025-06-13T14:57:06.52964954Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=4.539544ms
grafana | logger=migrator t=2025-06-13T14:57:06.541384879Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2025-06-13T14:57:06.54226453Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=879.291µs
grafana | logger=migrator t=2025-06-13T14:57:06.549155312Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2025-06-13T14:57:06.550309536Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.153824ms
grafana | logger=migrator t=2025-06-13T14:57:06.55407327Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2025-06-13T14:57:06.55487634Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=802.75µs
grafana | logger=migrator t=2025-06-13T14:57:06.558467303Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:57:06.558489023Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=19.67µs
grafana | logger=migrator t=2025-06-13T14:57:06.565954832Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.582042233Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=16.086021ms
grafana | logger=migrator t=2025-06-13T14:57:06.58514547Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.590773037Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.623657ms
grafana | logger=migrator t=2025-06-13T14:57:06.593741082Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.600182349Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.440697ms
grafana | logger=migrator t=2025-06-13T14:57:06.606276541Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2025-06-13T14:57:06.607363274Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.086853ms
grafana | logger=migrator t=2025-06-13T14:57:06.610504682Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2025-06-13T14:57:06.611699896Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.194994ms
grafana | logger=migrator t=2025-06-13T14:57:06.614955145Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.621118658Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.165173ms
grafana | logger=migrator t=2025-06-13T14:57:06.627054339Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.631485891Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.431152ms
grafana | logger=migrator t=2025-06-13T14:57:06.641718433Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2025-06-13T14:57:06.643474454Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.755161ms
grafana | logger=migrator t=2025-06-13T14:57:06.647924867Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:57:06.655870152Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.945285ms
grafana | logger=migrator t=2025-06-13T14:57:06.662213147Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2025-06-13T14:57:06.667427489Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.214342ms
grafana | logger=migrator t=2025-06-13T14:57:06.671541378Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2025-06-13T14:57:06.671555579Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=15.001µs
grafana | logger=migrator t=2025-06-13T14:57:06.678214847Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2025-06-13T14:57:06.679544163Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.331956ms
grafana | logger=migrator t=2025-06-13T14:57:06.68345131Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-13T14:57:06.684479842Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.024962ms
grafana | logger=migrator t=2025-06-13T14:57:06.690279261Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2025-06-13T14:57:06.691387674Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.107713ms
grafana | logger=migrator t=2025-06-13T14:57:06.695788837Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:57:06.695807957Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=19.941µs
grafana | logger=migrator t=2025-06-13T14:57:06.699075236Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:57:06.70611891Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.043304ms
grafana | logger=migrator t=2025-06-13T14:57:06.711439443Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:57:06.71789945Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.459707ms
grafana | logger=migrator t=2025-06-13T14:57:06.721015737Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:57:06.727237041Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.220814ms
grafana | logger=migrator t=2025-06-13T14:57:06.732746817Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:57:06.737781866Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.037259ms
grafana | logger=migrator t=2025-06-13T14:57:06.742030437Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2025-06-13T14:57:06.748930979Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.899782ms
grafana | logger=migrator t=2025-06-13T14:57:06.760553697Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2025-06-13T14:57:06.760696349Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=143.162µs
grafana | logger=migrator t=2025-06-13T14:57:06.764585345Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2025-06-13T14:57:06.765590187Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.007112ms
grafana | logger=migrator t=2025-06-13T14:57:06.768678654Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2025-06-13T14:57:06.773421881Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.743107ms
grafana | 
logger=migrator t=2025-06-13T14:57:06.777387718Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-13T14:57:06.777401798Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=14.6µs grafana | logger=migrator t=2025-06-13T14:57:06.779551934Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-13T14:57:06.784263739Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.691975ms grafana | logger=migrator t=2025-06-13T14:57:06.787049442Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-13T14:57:06.787828782Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=778.85µs grafana | logger=migrator t=2025-06-13T14:57:06.790660936Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-13T14:57:06.79520698Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.545714ms grafana | logger=migrator t=2025-06-13T14:57:06.799582182Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-13T14:57:06.800206549Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=623.797µs grafana | logger=migrator t=2025-06-13T14:57:06.803161925Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-13T14:57:06.803967564Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=804.999µs grafana | logger=migrator t=2025-06-13T14:57:06.808168084Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-13T14:57:06.814588Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.419566ms grafana | logger=migrator t=2025-06-13T14:57:06.819297616Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-13T14:57:06.820162437Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=865.631µs grafana | logger=migrator t=2025-06-13T14:57:06.823331815Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-13T14:57:06.824368157Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.020422ms grafana | logger=migrator t=2025-06-13T14:57:06.82798268Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-13T14:57:06.82884853Z level=info msg="Migration successfully executed" id="create alert_image table" duration=865.36µs grafana | logger=migrator t=2025-06-13T14:57:06.831799305Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-13T14:57:06.832491693Z level=info msg="Migration successfully 
executed" id="add unique index on token to alert_image table" duration=692.298µs grafana | logger=migrator t=2025-06-13T14:57:06.83553597Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-13T14:57:06.8355493Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=14.02µs grafana | logger=migrator t=2025-06-13T14:57:06.838592286Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-13T14:57:06.839253054Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=660.568µs grafana | logger=migrator t=2025-06-13T14:57:06.842929038Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-13T14:57:06.843605086Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=675.948µs grafana | logger=migrator t=2025-06-13T14:57:06.846377299Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T14:57:06.846658862Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T14:57:06.85068856Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-13T14:57:06.850945003Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=256.593µs grafana | logger=migrator t=2025-06-13T14:57:06.857649633Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-13T14:57:06.858705075Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.055432ms grafana | logger=migrator t=2025-06-13T14:57:06.87422734Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-13T14:57:06.882402127Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.175927ms grafana | logger=migrator t=2025-06-13T14:57:06.885479704Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-13T14:57:06.886478846Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=998.812µs grafana | logger=migrator t=2025-06-13T14:57:06.892122413Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-13T14:57:06.893192706Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.070313ms grafana | logger=migrator t=2025-06-13T14:57:06.898252016Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-13T14:57:06.898891523Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=639.467µs grafana | logger=migrator t=2025-06-13T14:57:06.902401365Z level=info msg="Executing migration" id="add index library_element_connection 
element_id-kind-connection_id" grafana | logger=migrator t=2025-06-13T14:57:06.903142844Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=741.919µs grafana | logger=migrator t=2025-06-13T14:57:06.907357844Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:06.908371786Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.016802ms grafana | logger=migrator t=2025-06-13T14:57:06.912807809Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-13T14:57:06.91283197Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.841µs grafana | logger=migrator t=2025-06-13T14:57:06.916492193Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-13T14:57:06.916527083Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=35.84µs grafana | logger=migrator t=2025-06-13T14:57:06.921205709Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-13T14:57:06.92967292Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.457821ms grafana | logger=migrator t=2025-06-13T14:57:06.936311879Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-13T14:57:06.936846145Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=536.176µs grafana | logger=migrator t=2025-06-13T14:57:06.942991058Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-13T14:57:06.94398655Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=995.442µs grafana | logger=migrator t=2025-06-13T14:57:06.947409801Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-13T14:57:06.947752115Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=341.984µs grafana | logger=migrator t=2025-06-13T14:57:06.951872844Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-13T14:57:06.952779115Z level=info msg="Migration successfully executed" id="create data_keys table" duration=905.651µs grafana | logger=migrator t=2025-06-13T14:57:06.955842541Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-13T14:57:06.956742232Z level=info msg="Migration successfully executed" id="create secrets table" duration=899.401µs grafana | logger=migrator t=2025-06-13T14:57:06.962363789Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-13T14:57:06.990861778Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.494659ms grafana | logger=migrator t=2025-06-13T14:57:07.001208731Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-13T14:57:07.008576819Z level=info msg="Migration successfully executed" id="add 
name column into data_keys" duration=7.368058ms grafana | logger=migrator t=2025-06-13T14:57:07.015633545Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-13T14:57:07.015891848Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=257.813µs grafana | logger=migrator t=2025-06-13T14:57:07.021668828Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-13T14:57:07.050274452Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.605214ms grafana | logger=migrator t=2025-06-13T14:57:07.09328821Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-13T14:57:07.132791586Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=39.495436ms grafana | logger=migrator t=2025-06-13T14:57:07.137858217Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-13T14:57:07.138608156Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=747.229µs grafana | logger=migrator t=2025-06-13T14:57:07.141933796Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-13T14:57:07.143545425Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.611069ms grafana | logger=migrator t=2025-06-13T14:57:07.146945476Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-13T14:57:07.14728825Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=315.034µs grafana | logger=migrator t=2025-06-13T14:57:07.150928774Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-13T14:57:07.151914626Z level=info msg="Migration successfully executed" id="create permission table" duration=984.772µs grafana | logger=migrator t=2025-06-13T14:57:07.156031305Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-13T14:57:07.157006657Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=975.362µs grafana | logger=migrator t=2025-06-13T14:57:07.160917184Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-13T14:57:07.161949857Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.029443ms grafana | logger=migrator t=2025-06-13T14:57:07.165190626Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-13T14:57:07.166921227Z level=info msg="Migration successfully executed" id="create role table" duration=1.730081ms grafana | logger=migrator t=2025-06-13T14:57:07.173394835Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-13T14:57:07.179120844Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.728089ms grafana | logger=migrator t=2025-06-13T14:57:07.185077866Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator 
t=2025-06-13T14:57:07.190313089Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.235073ms grafana | logger=migrator t=2025-06-13T14:57:07.193313355Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-13T14:57:07.194063684Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=749.959µs grafana | logger=migrator t=2025-06-13T14:57:07.197009239Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-13T14:57:07.197733188Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=723.619µs grafana | logger=migrator t=2025-06-13T14:57:07.203079492Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:07.204206826Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.126884ms grafana | logger=migrator t=2025-06-13T14:57:07.2086757Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-13T14:57:07.210064526Z level=info msg="Migration successfully executed" id="create team role table" duration=1.388376ms grafana | logger=migrator t=2025-06-13T14:57:07.213790521Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-13T14:57:07.215559923Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.776592ms grafana | logger=migrator t=2025-06-13T14:57:07.221831188Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-13T14:57:07.222933941Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.102613ms grafana | logger=migrator t=2025-06-13T14:57:07.22697673Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-13T14:57:07.228713951Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.736411ms grafana | logger=migrator t=2025-06-13T14:57:07.261542907Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-13T14:57:07.262933233Z level=info msg="Migration successfully executed" id="create user role table" duration=1.390216ms grafana | logger=migrator t=2025-06-13T14:57:07.269615124Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-13T14:57:07.270815568Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.203064ms grafana | logger=migrator t=2025-06-13T14:57:07.327448851Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-13T14:57:07.329541086Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.091115ms grafana | logger=migrator t=2025-06-13T14:57:07.339744959Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-13T14:57:07.341689822Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.946183ms grafana | logger=migrator t=2025-06-13T14:57:07.356957766Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator 
t=2025-06-13T14:57:07.358431184Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.469708ms grafana | logger=migrator t=2025-06-13T14:57:07.362169239Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-13T14:57:07.363289103Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.119534ms grafana | logger=migrator t=2025-06-13T14:57:07.366523682Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-13T14:57:07.367673646Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.149784ms grafana | logger=migrator t=2025-06-13T14:57:07.37220044Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-13T14:57:07.381018826Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.816966ms grafana | logger=migrator t=2025-06-13T14:57:07.384576719Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-13T14:57:07.385771703Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.195094ms grafana | logger=migrator t=2025-06-13T14:57:07.389086813Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-13T14:57:07.390317528Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.229955ms grafana | logger=migrator t=2025-06-13T14:57:07.393721349Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:07.394855603Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.133954ms grafana | logger=migrator t=2025-06-13T14:57:07.398939992Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-13T14:57:07.400171027Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.230155ms grafana | logger=migrator t=2025-06-13T14:57:07.404103224Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-13T14:57:07.404952324Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=848.69µs grafana | logger=migrator t=2025-06-13T14:57:07.408921522Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-13T14:57:07.410159647Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.237495ms grafana | logger=migrator t=2025-06-13T14:57:07.413418596Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-13T14:57:07.423596539Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.175163ms grafana | logger=migrator t=2025-06-13T14:57:07.426826098Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-13T14:57:07.435602494Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.774066ms grafana | logger=migrator t=2025-06-13T14:57:07.439091666Z level=info msg="Executing migration" id="permission 
attribute migration" grafana | logger=migrator t=2025-06-13T14:57:07.444918066Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.82581ms grafana | logger=migrator t=2025-06-13T14:57:07.449187427Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-13T14:57:07.454993997Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.80549ms grafana | logger=migrator t=2025-06-13T14:57:07.458461509Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-13T14:57:07.45933215Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=870.831µs grafana | logger=migrator t=2025-06-13T14:57:07.464836776Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-13T14:57:07.466941791Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.102695ms grafana | logger=migrator t=2025-06-13T14:57:07.510654378Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-13T14:57:07.512614332Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.961454ms grafana | logger=migrator t=2025-06-13T14:57:07.518112108Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-13T14:57:07.526156965Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.045217ms grafana | logger=migrator t=2025-06-13T14:57:07.530703779Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-13T14:57:07.53157195Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=867.971µs grafana | logger=migrator t=2025-06-13T14:57:07.535153813Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-13T14:57:07.536293117Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.134014ms grafana | logger=migrator t=2025-06-13T14:57:07.539723858Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-13T14:57:07.54072136Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=996.792µs grafana | logger=migrator t=2025-06-13T14:57:07.546479329Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-13T14:57:07.547593923Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.119304ms grafana | logger=migrator t=2025-06-13T14:57:07.554144672Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T14:57:07.554174072Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=30.62µs grafana | logger=migrator t=2025-06-13T14:57:07.557898027Z level=info msg="Executing migration" id="create 
query_history_details table v1" grafana | logger=migrator t=2025-06-13T14:57:07.559451536Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.552369ms grafana | logger=migrator t=2025-06-13T14:57:07.563798048Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-13T14:57:07.563833199Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=36.151µs grafana | logger=migrator t=2025-06-13T14:57:07.567032447Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-13T14:57:07.567331771Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=299.204µs grafana | logger=migrator t=2025-06-13T14:57:07.570433608Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-13T14:57:07.571212858Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=779.11µs grafana | logger=migrator t=2025-06-13T14:57:07.574867572Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-13T14:57:07.575865484Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=997.732µs grafana | logger=migrator t=2025-06-13T14:57:07.580186726Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-13T14:57:07.580422358Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=235.632µs grafana | logger=migrator t=2025-06-13T14:57:07.585283517Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-13T14:57:07.585715172Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=431.815µs grafana | logger=migrator t=2025-06-13T14:57:07.599123584Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-13T14:57:07.600670702Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.548598ms grafana | logger=migrator t=2025-06-13T14:57:07.606932228Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-13T14:57:07.608164053Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.231105ms grafana | logger=migrator t=2025-06-13T14:57:07.61291079Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-13T14:57:07.625137947Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.224167ms grafana | logger=migrator t=2025-06-13T14:57:07.629291527Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-13T14:57:07.629306857Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=16.11µs grafana | logger=migrator t=2025-06-13T14:57:07.634339708Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-13T14:57:07.635650484Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.312186ms grafana | 
logger=migrator t=2025-06-13T14:57:07.644785134Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-13T14:57:07.645876937Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.091623ms grafana | logger=migrator t=2025-06-13T14:57:07.660682225Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-13T14:57:07.661486935Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=804.7µs grafana | logger=migrator t=2025-06-13T14:57:07.66603449Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-13T14:57:07.67271195Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.67657ms grafana | logger=migrator t=2025-06-13T14:57:07.675663306Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.677107263Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.443437ms grafana | logger=migrator t=2025-06-13T14:57:07.686171523Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.687172305Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.000542ms grafana | logger=migrator t=2025-06-13T14:57:07.690481945Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:57:07.71257189Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.085626ms grafana | logger=migrator t=2025-06-13T14:57:07.718998638Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-13T14:57:07.720590597Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.593739ms grafana | logger=migrator t=2025-06-13T14:57:07.725545437Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.726807202Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.261775ms grafana | logger=migrator t=2025-06-13T14:57:07.736777402Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.738119319Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.344197ms grafana | logger=migrator t=2025-06-13T14:57:07.741320527Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-13T14:57:07.742564532Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.244185ms grafana | logger=migrator t=2025-06-13T14:57:07.748261411Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:07.748596455Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=337.624µs grafana | logger=migrator t=2025-06-13T14:57:07.751955785Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:57:07.752626613Z level=info 
msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=670.488µs grafana | logger=migrator t=2025-06-13T14:57:07.756721803Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-13T14:57:07.762898337Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.174534ms grafana | logger=migrator t=2025-06-13T14:57:07.768741067Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-13T14:57:07.777206709Z level=info msg="Migration successfully executed" id="add type column" duration=8.465862ms grafana | logger=migrator t=2025-06-13T14:57:07.783605806Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-13T14:57:07.78476929Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.164224ms grafana | logger=migrator t=2025-06-13T14:57:07.789197594Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-13T14:57:07.790215966Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.018162ms grafana | logger=migrator t=2025-06-13T14:57:07.794921193Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.795739163Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.811374501Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.8120927Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.81543423Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-13T14:57:07.816228679Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=796.429µs grafana | logger=migrator t=2025-06-13T14:57:07.82870023Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-13T14:57:07.82952486Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=824.43µs grafana | logger=migrator t=2025-06-13T14:57:07.832474985Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.833267375Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=794.31µs grafana | logger=migrator t=2025-06-13T14:57:07.836097009Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:57:07.83706453Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=967.501µs grafana | logger=migrator t=2025-06-13T14:57:07.845937477Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.847061051Z level=info msg="Migration successfully executed" id="drop index 
UQE_dashboard_public_config_uid - v2" duration=1.124324ms grafana | logger=migrator t=2025-06-13T14:57:07.850452912Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.851497774Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.041683ms grafana | logger=migrator t=2025-06-13T14:57:07.854693083Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-13T14:57:07.855470442Z level=info msg="Migration successfully executed" id="Drop public config table" duration=776.859µs grafana | logger=migrator t=2025-06-13T14:57:07.862448216Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-13T14:57:07.86360339Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.152304ms grafana | logger=migrator t=2025-06-13T14:57:07.866454635Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.86776813Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.269895ms grafana | logger=migrator t=2025-06-13T14:57:07.872442227Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:07.87352359Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.080693ms grafana | logger=migrator t=2025-06-13T14:57:07.877643089Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-13T14:57:07.878715312Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.071623ms grafana | logger=migrator t=2025-06-13T14:57:07.881957551Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-13T14:57:07.909026307Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=27.068056ms grafana | logger=migrator t=2025-06-13T14:57:07.999800111Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-13T14:57:08.008410615Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.612104ms grafana | logger=migrator t=2025-06-13T14:57:08.094693311Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-13T14:57:08.103378868Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.690787ms grafana | logger=migrator t=2025-06-13T14:57:08.165971722Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-13T14:57:08.166250315Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=281.153µs grafana | logger=migrator t=2025-06-13T14:57:08.221115493Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-13T14:57:08.227605543Z level=info msg="Migration successfully executed" id="add 
share column" duration=6.49203ms grafana | logger=migrator t=2025-06-13T14:57:08.240772796Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-13T14:57:08.241022729Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=251.903µs grafana | logger=migrator t=2025-06-13T14:57:08.243153905Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-13T14:57:08.244282419Z level=info msg="Migration successfully executed" id="create file table" duration=1.128884ms grafana | logger=migrator t=2025-06-13T14:57:08.247013903Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-13T14:57:08.248204578Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.190735ms grafana | logger=migrator t=2025-06-13T14:57:08.251314446Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-13T14:57:08.252500151Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.185115ms grafana | logger=migrator t=2025-06-13T14:57:08.258054209Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-13T14:57:08.258872049Z level=info msg="Migration successfully executed" id="create file_meta table" duration=817.52µs grafana | logger=migrator t=2025-06-13T14:57:08.261600963Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-13T14:57:08.262736227Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.134744ms grafana | logger=migrator t=2025-06-13T14:57:08.268326086Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-13T14:57:08.268366577Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=41.791µs grafana | logger=migrator t=2025-06-13T14:57:08.273686433Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-13T14:57:08.273705483Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.091µs grafana | logger=migrator t=2025-06-13T14:57:08.275973581Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-13T14:57:08.276577808Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=604.127µs grafana | logger=migrator t=2025-06-13T14:57:08.27914419Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-13T14:57:08.279381733Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=237.473µs grafana | logger=migrator t=2025-06-13T14:57:08.282225158Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-13T14:57:08.283599795Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.374217ms grafana | logger=migrator t=2025-06-13T14:57:08.288421435Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | 
logger=migrator t=2025-06-13T14:57:08.297864991Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.442486ms grafana | logger=migrator t=2025-06-13T14:57:08.303495951Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-13T14:57:08.303667373Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=169.102µs grafana | logger=migrator t=2025-06-13T14:57:08.306378416Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-13T14:57:08.307371489Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=992.693µs grafana | logger=migrator t=2025-06-13T14:57:08.315422709Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-13T14:57:08.316208368Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=786.53µs grafana | logger=migrator t=2025-06-13T14:57:08.319437028Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-13T14:57:08.319854153Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=416.475µs grafana | logger=migrator t=2025-06-13T14:57:08.333461521Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-13T14:57:08.33416553Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=704.779µs grafana | logger=migrator t=2025-06-13T14:57:08.338430102Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:57:08.348575018Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.144926ms grafana | logger=migrator t=2025-06-13T14:57:08.353496369Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:57:08.361964033Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.466464ms grafana | logger=migrator t=2025-06-13T14:57:08.366009593Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-13T14:57:08.366795433Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=786.05µs grafana | logger=migrator t=2025-06-13T14:57:08.370704662Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-13T14:57:08.447954586Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.245365ms grafana | logger=migrator t=2025-06-13T14:57:08.459946224Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-13T14:57:08.461918009Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.972924ms grafana | logger=migrator t=2025-06-13T14:57:08.467251204Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-13T14:57:08.469030656Z 
level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.779002ms grafana | logger=migrator t=2025-06-13T14:57:08.476221055Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-13T14:57:08.506293227Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=30.070072ms grafana | logger=migrator t=2025-06-13T14:57:08.510172595Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:57:08.519106035Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.92701ms grafana | logger=migrator t=2025-06-13T14:57:08.525810618Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-13T14:57:08.526166402Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=356.644µs grafana | logger=migrator t=2025-06-13T14:57:08.530418195Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-13T14:57:08.530613747Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=195.732µs grafana | logger=migrator t=2025-06-13T14:57:08.537702765Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-13T14:57:08.537901137Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=198.532µs grafana | logger=migrator t=2025-06-13T14:57:08.543612338Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-13T14:57:08.543942662Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=331.094µs grafana | logger=migrator t=2025-06-13T14:57:08.547350604Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-13T14:57:08.547659648Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=309.104µs grafana | logger=migrator t=2025-06-13T14:57:08.552296175Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-13T14:57:08.553344888Z level=info msg="Migration successfully executed" id="create folder table" duration=1.048363ms grafana | logger=migrator t=2025-06-13T14:57:08.5575833Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-13T14:57:08.558775185Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.191745ms grafana | logger=migrator t=2025-06-13T14:57:08.572998931Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-13T14:57:08.574035094Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.037983ms grafana | logger=migrator t=2025-06-13T14:57:08.582794572Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-13T14:57:08.582840192Z level=info msg="Migration successfully executed" id="Update folder title length" duration=48.4µs grafana | 
logger=migrator t=2025-06-13T14:57:08.588557303Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:57:08.589549556Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=992.143µs grafana | logger=migrator t=2025-06-13T14:57:08.592532772Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:57:08.593330082Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=797.63µs grafana | logger=migrator t=2025-06-13T14:57:08.598054991Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-13T14:57:08.598868201Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=813.15µs grafana | logger=migrator t=2025-06-13T14:57:08.602557236Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-13T14:57:08.60286859Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=311.504µs grafana | logger=migrator t=2025-06-13T14:57:08.605440292Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-13T14:57:08.605622924Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=182.792µs grafana | logger=migrator t=2025-06-13T14:57:08.609196648Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-13T14:57:08.609968998Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=772.14µs grafana | logger=migrator t=2025-06-13T14:57:08.617106616Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-13T14:57:08.617912036Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=802.81µs grafana | logger=migrator t=2025-06-13T14:57:08.62547972Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:57:08.626254709Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=774.959µs grafana | logger=migrator t=2025-06-13T14:57:08.632052691Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:57:08.632984412Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=931.441µs grafana | logger=migrator t=2025-06-13T14:57:08.636993332Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:57:08.637864383Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=871.041µs grafana | logger=migrator t=2025-06-13T14:57:08.640620477Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:57:08.641464587Z level=info msg="Migration successfully executed" id="Remove unique index 
UQE_folder_org_id_parent_uid_title" duration=843.69µs grafana | logger=migrator t=2025-06-13T14:57:08.647629743Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-13T14:57:08.648267641Z level=info msg="Migration successfully executed" id="create anon_device table" duration=637.938µs grafana | logger=migrator t=2025-06-13T14:57:08.65146515Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-13T14:57:08.652288631Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=823.651µs grafana | logger=migrator t=2025-06-13T14:57:08.65949108Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-13T14:57:08.660677984Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.188624ms grafana | logger=migrator t=2025-06-13T14:57:08.666627458Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-13T14:57:08.66760635Z level=info msg="Migration successfully executed" id="create signing_key table" duration=980.112µs grafana | logger=migrator t=2025-06-13T14:57:08.670661778Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-13T14:57:08.671545989Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=884.211µs grafana | logger=migrator t=2025-06-13T14:57:08.67409938Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-13T14:57:08.674953201Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=853.031µs grafana | logger=migrator t=2025-06-13T14:57:08.684034073Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-13T14:57:08.684301756Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=268.723µs grafana | logger=migrator t=2025-06-13T14:57:08.704407805Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-13T14:57:08.71213468Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.736945ms grafana | logger=migrator t=2025-06-13T14:57:08.715309459Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-13T14:57:08.715861126Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=547.807µs grafana | logger=migrator t=2025-06-13T14:57:08.721483216Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:57:08.721498476Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=15.97µs grafana | logger=migrator t=2025-06-13T14:57:08.723697363Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:57:08.724572844Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=844.42µs grafana | logger=migrator 
t=2025-06-13T14:57:08.727765043Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:57:08.727778274Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=13.571µs grafana | logger=migrator t=2025-06-13T14:57:08.733459294Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:57:08.734564767Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.105223ms grafana | logger=migrator t=2025-06-13T14:57:08.737814138Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:57:08.738688278Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=873.66µs grafana | logger=migrator t=2025-06-13T14:57:08.742949021Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:57:08.743789162Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=841.611µs grafana | logger=migrator t=2025-06-13T14:57:08.7493453Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-13T14:57:08.7501148Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=769.87µs grafana | logger=migrator t=2025-06-13T14:57:08.754419433Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-13T14:57:08.75500726Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=588.217µs grafana | logger=migrator t=2025-06-13T14:57:08.757841655Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-13T14:57:08.758046748Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=205.353µs grafana | logger=migrator t=2025-06-13T14:57:08.764573368Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-13T14:57:08.765291997Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=718.889µs grafana | logger=migrator t=2025-06-13T14:57:08.768454046Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-13T14:57:08.769144565Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=690.449µs grafana | logger=migrator t=2025-06-13T14:57:08.772155422Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-13T14:57:08.77284527Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=691.478µs grafana | logger=migrator t=2025-06-13T14:57:08.779568593Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-13T14:57:08.787156137Z level=info msg="Migration successfully executed" id="add stack_id column" duration=7.587684ms grafana | logger=migrator 
t=2025-06-13T14:57:08.790054333Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-13T14:57:08.796685135Z level=info msg="Migration successfully executed" id="add region_slug column" duration=6.630412ms grafana | logger=migrator t=2025-06-13T14:57:08.800427491Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-13T14:57:08.807177755Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=6.749904ms grafana | logger=migrator t=2025-06-13T14:57:08.830167769Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-13T14:57:08.837080524Z level=info msg="Migration successfully executed" id="add migration uid column" duration=6.918265ms grafana | logger=migrator t=2025-06-13T14:57:08.841394607Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-13T14:57:08.841520919Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=126.252µs grafana | logger=migrator t=2025-06-13T14:57:08.843810667Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-13T14:57:08.844659608Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=848.901µs grafana | logger=migrator t=2025-06-13T14:57:08.847625815Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-13T14:57:08.854735192Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=7.108017ms grafana | logger=migrator t=2025-06-13T14:57:08.860302431Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-13T14:57:08.860457763Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=155.882µs grafana | logger=migrator t=2025-06-13T14:57:08.866289845Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-13T14:57:08.86753543Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.247695ms grafana | logger=migrator t=2025-06-13T14:57:08.87076786Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:57:08.895713858Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.945768ms grafana | logger=migrator t=2025-06-13T14:57:08.906401801Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-13T14:57:08.911645465Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=5.247555ms grafana | logger=migrator t=2025-06-13T14:57:08.914890486Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:08.915921248Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.030412ms grafana | logger=migrator t=2025-06-13T14:57:08.918857984Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:08.919217719Z 
level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=359.385µs grafana | logger=migrator t=2025-06-13T14:57:08.924649526Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:57:08.925298244Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=648.278µs grafana | logger=migrator t=2025-06-13T14:57:08.92817161Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:57:08.955291265Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=27.118855ms grafana | logger=migrator t=2025-06-13T14:57:08.962710997Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-13T14:57:08.963738899Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.030032ms grafana | logger=migrator t=2025-06-13T14:57:08.967124041Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-13T14:57:08.968008032Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=884.101µs grafana | logger=migrator t=2025-06-13T14:57:08.973457429Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-13T14:57:08.973792174Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=334.695µs grafana | logger=migrator t=2025-06-13T14:57:08.977066344Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:57:08.977916514Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=849.8µs grafana | logger=migrator t=2025-06-13T14:57:08.981344207Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-13T14:57:08.993966283Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.622476ms grafana | logger=migrator t=2025-06-13T14:57:08.99941571Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-13T14:57:09.013236269Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=13.823029ms grafana | logger=migrator t=2025-06-13T14:57:09.017837545Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-13T14:57:09.02644134Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=8.602935ms grafana | logger=migrator t=2025-06-13T14:57:09.029140883Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-13T14:57:09.035820634Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.679041ms grafana | logger=migrator t=2025-06-13T14:57:09.039097634Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-13T14:57:09.048349086Z level=info msg="Migration successfully executed" id="add snapshot encryption_key 
column" duration=9.251552ms grafana | logger=migrator t=2025-06-13T14:57:09.054261628Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-13T14:57:09.060947529Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=6.683331ms grafana | logger=migrator t=2025-06-13T14:57:09.078906827Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-13T14:57:09.079625806Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=716.909µs grafana | logger=migrator t=2025-06-13T14:57:09.082700423Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-13T14:57:09.11612967Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=33.428457ms grafana | logger=migrator t=2025-06-13T14:57:09.120814947Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-13T14:57:09.127594279Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=6.779542ms grafana | logger=migrator t=2025-06-13T14:57:09.134823377Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-13T14:57:09.146036183Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.203066ms grafana | logger=migrator t=2025-06-13T14:57:09.151115385Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-13T14:57:09.157843397Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=6.730282ms grafana | logger=migrator t=2025-06-13T14:57:09.164777751Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-13T14:57:09.171666325Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=6.890394ms grafana | logger=migrator t=2025-06-13T14:57:09.174619301Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-13T14:57:09.174636991Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=18.63µs grafana | logger=migrator t=2025-06-13T14:57:09.178307875Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-13T14:57:09.178324686Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=17.711µs grafana | logger=migrator t=2025-06-13T14:57:09.182322894Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:57:09.189848985Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.528191ms grafana | logger=migrator t=2025-06-13T14:57:09.198829484Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.208919437Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" 
duration=10.089643ms grafana | logger=migrator t=2025-06-13T14:57:09.211860713Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-13T14:57:09.212153186Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=292.033µs grafana | logger=migrator t=2025-06-13T14:57:09.216116635Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-13T14:57:09.216439629Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=322.164µs grafana | logger=migrator t=2025-06-13T14:57:09.223890219Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:57:09.233951912Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.060623ms grafana | logger=migrator t=2025-06-13T14:57:09.23878786Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.246088809Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.300519ms grafana | logger=migrator t=2025-06-13T14:57:09.250863117Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:57:09.258446509Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.585352ms grafana | logger=migrator t=2025-06-13T14:57:09.263000144Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:57:09.269736356Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=6.736072ms grafana | logger=migrator t=2025-06-13T14:57:09.277842424Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-13T14:57:09.278421841Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=579.577µs grafana | logger=migrator t=2025-06-13T14:57:09.282657702Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:57:09.289667888Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=7.012936ms grafana | logger=migrator t=2025-06-13T14:57:09.297947478Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.309722951Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.776793ms grafana | logger=migrator t=2025-06-13T14:57:09.329751884Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-13T14:57:09.332091533Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=2.341099ms grafana | logger=migrator t=2025-06-13T14:57:09.337059393Z level=info msg="Executing migration" id="adding action set permissions" grafana | 
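
The cloud_migration_session and cloud_migration_snapshot sequences above (rename to *_tmp_qwerty, create v2, copy v1 to v2, drop the tmp table) are the standard table-rebuild idiom for engines such as SQLite that cannot drop or retype columns in place: move the old table aside, create the new shape, copy the surviving columns across, then drop the temporary copy. A minimal sqlite3 sketch (the column list is an assumption for illustration):

import sqlite3

con = sqlite3.connect("grafana.db")
cur = con.cursor()
# 1. Move the old table aside under a throwaway name.
cur.execute("ALTER TABLE cloud_migration_session RENAME TO cloud_migration_session_tmp_qwerty")
# 2. Create the v2 shape (columns here are illustrative).
cur.execute("""CREATE TABLE cloud_migration_session (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    uid TEXT, auth_token TEXT, created TIMESTAMP, updated TIMESTAMP)""")
# 3. Copy the surviving columns from v1 to v2.
cur.execute("""INSERT INTO cloud_migration_session (id, uid, auth_token, created, updated)
               SELECT id, uid, auth_token, created, updated
                 FROM cloud_migration_session_tmp_qwerty""")
# 4. Drop the temporary copy.
cur.execute("DROP TABLE cloud_migration_session_tmp_qwerty")
con.commit()
con.close()
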
logger=migrator t=2025-06-13T14:57:09.337733661Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=674.938µs grafana | logger=migrator t=2025-06-13T14:57:09.342936475Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-13T14:57:09.343869536Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=933.441µs grafana | logger=migrator t=2025-06-13T14:57:09.348718285Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:57:09.348744175Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=26.88µs grafana | logger=migrator t=2025-06-13T14:57:09.351935834Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:57:09.351949984Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=14.92µs grafana | logger=migrator t=2025-06-13T14:57:09.356157125Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-13T14:57:09.356656141Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=498.966µs grafana | logger=migrator t=2025-06-13T14:57:09.361085455Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.369443377Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=8.357472ms grafana | logger=migrator t=2025-06-13T14:57:09.372826938Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:57:09.383277975Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=10.450757ms grafana | logger=migrator t=2025-06-13T14:57:09.388808852Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-13T14:57:09.389705083Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=893.081µs grafana | logger=migrator t=2025-06-13T14:57:09.395730286Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-13T14:57:09.397025712Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.294836ms grafana | logger=migrator t=2025-06-13T14:57:09.401173702Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:57:09.411086163Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=9.912111ms grafana | logger=migrator t=2025-06-13T14:57:09.41419994Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.421209856Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.007656ms grafana | logger=migrator t=2025-06-13T14:57:09.426012274Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:57:09.426032324Z 
level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-13T14:57:09.426207066Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-13T14:57:09.426216976Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=204.962µs grafana | logger=migrator t=2025-06-13T14:57:09.429475116Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-13T14:57:09.429943572Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=468.106µs grafana | logger=migrator t=2025-06-13T14:57:09.434222914Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T14:57:09.43557692Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.353776ms grafana | logger=migrator t=2025-06-13T14:57:09.449473759Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:57:09.450770415Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.296556ms grafana | logger=migrator t=2025-06-13T14:57:09.456067059Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:57:09.457377715Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.295226ms grafana | logger=migrator t=2025-06-13T14:57:09.461605897Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-13T14:57:09.463094715Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.489018ms grafana | logger=migrator t=2025-06-13T14:57:09.467708271Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-13T14:57:09.477483429Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.775138ms grafana | logger=migrator t=2025-06-13T14:57:09.481954634Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:57:09.496499011Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=14.517596ms grafana | logger=migrator t=2025-06-13T14:57:09.500431908Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-13T14:57:09.508044451Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=7.614713ms grafana | logger=migrator t=2025-06-13T14:57:09.514640461Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:57:09.52197108Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to 
alert_rule_version" duration=7.332769ms grafana | logger=migrator t=2025-06-13T14:57:09.525193459Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-13T14:57:09.525359051Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-13T14:57:09.525370172Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=177.323µs grafana | logger=migrator t=2025-06-13T14:57:09.530450163Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-13T14:57:09.531417425Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=966.812µs grafana | logger=migrator t=2025-06-13T14:57:09.536739179Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.64273437s grafana | logger=migrator t=2025-06-13T14:57:09.537163604Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-13T14:57:09.553462423Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-13T14:57:09.553855187Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-13T14:57:09.579021153Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:57:09.670498735Z level=info msg="Restored cache from database" duration=452.506µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.679335672Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-13T14:57:09.679353312Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-13T14:57:09.686655411Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-13T14:57:09.68740217Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=746.289µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.694993153Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-13T14:57:09.695036083Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=43.65µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.700385158Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-13T14:57:09.70052068Z level=info msg="Migration successfully executed" id="drop table resource" duration=136.622µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.705270907Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-13T14:57:09.70712271Z level=info msg="Migration successfully executed" id="create table resource" duration=1.854113ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.712425824Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:57:09.713353945Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=927.521µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.716265001Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.716347302Z level=info msg="Migration successfully executed" id="drop table resource_history" 
duration=81.911µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.719436129Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.720611364Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.174755ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.729169348Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:57:09.731552247Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.379289ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.735690437Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:57:09.737010393Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.319806ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.742611461Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:57:09.742724223Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=112.601µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.746162044Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:57:09.746973394Z level=info msg="Migration successfully executed" id="create table resource_version" duration=812.33µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.752731674Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:57:09.753617955Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=890.081µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.756812284Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:57:09.756894785Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=82.581µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.759174413Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:57:09.760011883Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=837.49µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.765112454Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:57:09.766031705Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=919.051µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.77049082Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:57:09.771446151Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=955.551µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.774550829Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.7820476Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" 
duration=7.482161ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.789052995Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-13T14:57:09.7968672Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=7.813875ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.800675227Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-13T14:57:09.801552947Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=877.59µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.813132308Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-13T14:57:09.814009079Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=876.861µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.822191708Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.835141445Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.950007ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.840699673Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-13T14:57:09.848916703Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=8.21862ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.852494556Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-13T14:57:09.852544327Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-13T14:57:09.853328946Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=836.38µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.857952133Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-13T14:57:09.85935513Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.403227ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.86353102Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.874816988Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.286498ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.880175452Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-13T14:57:09.881075793Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=896.171µs grafana | logger=resource-migrator t=2025-06-13T14:57:09.884181341Z level=info msg="migrations completed" performed=26 skipped=0 duration=197.56524ms grafana | logger=resource-migrator t=2025-06-13T14:57:09.885121993Z level=info msg="Unlocking database" grafana | t=2025-06-13T14:57:09.885446566Z level=info caller=logger.go:214 time=2025-06-13T14:57:09.885419666Z msg="Using channel notifier" logger=sql-resource-server grafana | 
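
The "Database locked, sleeping then retrying" entries a little further down come from SQLite's single-writer model: while startup tasks (provisioning, state-cache warming, plugin installs) write concurrently, a writer that hits SQLITE_BUSY backs off and retries rather than failing. A retry wrapper in the same spirit (the retry count and sleep interval are illustrative, not Grafana's actual values):

import sqlite3, time

def exec_with_retry(con, sql, params=(), retries=5, sleep_s=0.1):
    """Retry a write that fails with 'database is locked' (SQLITE_BUSY)."""
    for attempt in range(retries + 1):
        try:
            con.execute(sql, params)
            con.commit()
            return
        except sqlite3.OperationalError as exc:
            if "database is locked" not in str(exc) or attempt == retries:
                raise
            print(f'msg="Database locked, sleeping then retrying" retry={attempt}')
            time.sleep(sleep_s)
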
logger=plugin.store t=2025-06-13T14:57:09.899326105Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-13T14:57:09.943154538Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-13T14:57:09.943186388Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-13T14:57:09.94330803Z level=info msg="Plugins loaded" count=53 duration=43.983005ms grafana | logger=query_data t=2025-06-13T14:57:09.956984106Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-13T14:57:09.961532641Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T14:57:09.988093424Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-13T14:57:09.99600261Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-13T14:57:09.99602551Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-13T14:57:09.9984479Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-13T14:57:09.999209199Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:09.999640794Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T14:57:10.001404415Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=ngalert.state.manager t=2025-06-13T14:57:10.002032013Z level=info msg="Warming state cache for startup" grafana | logger=sqlstore.transactions t=2025-06-13T14:57:10.015670119Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=http.server t=2025-06-13T14:57:10.018674085Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=provisioning.datasources t=2025-06-13T14:57:10.065741036Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-13T14:57:10.077409167Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=grafana.update.checker t=2025-06-13T14:57:10.098003867Z level=info msg="Update check succeeded" duration=97.094068ms grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:57:10.123805039Z level=info msg="Patterns update finished" duration=121.005447ms grafana | logger=plugins.update.checker t=2025-06-13T14:57:10.135811245Z level=info msg="Update check succeeded" duration=134.630492ms grafana | logger=provisioning.alerting t=2025-06-13T14:57:10.186421638Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-13T14:57:10.186445679Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-13T14:57:10.187550022Z level=info msg="starting to provision dashboards" grafana | logger=ngalert.state.manager t=2025-06-13T14:57:10.19065878Z level=info msg="State cache has been initialized" states=0 duration=188.819029ms grafana | logger=ngalert.scheduler t=2025-06-13T14:57:10.190720521Z level=info msg="Starting scheduler" 
tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-13T14:57:10.190840352Z level=info msg=starting first_tick=2025-06-13T14:57:20Z grafana | logger=plugin.installer t=2025-06-13T14:57:10.367426623Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-13T14:57:10.417992515Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-13T14:57:10.442811056Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:10.442847666Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=443.166241ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:10.442873927Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.588163158Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.590252023Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.590964772Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.595953182Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.600456447Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.60149731Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.602449971Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.603231991Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:57:10.604513626Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=plugin.installer t=2025-06-13T14:57:10.639079275Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=app-registry t=2025-06-13T14:57:10.688892739Z level=info msg="app registry initialized" grafana | logger=installer.fs t=2025-06-13T14:57:10.732621469Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-13T14:57:10.750349064Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:10.750381334Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=307.500847ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:10.750405625Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-13T14:57:10.92990352Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-13T14:57:11.010104813Z level=info msg="Downloaded 
and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app"
grafana | logger=plugins.registration t=2025-06-13T14:57:11.034004331Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app
grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:11.034042442Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=283.619037ms
grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:11.034101832Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
grafana | logger=provisioning.dashboard t=2025-06-13T14:57:11.142457544Z level=info msg="finished to provision dashboards"
grafana | logger=plugin.installer t=2025-06-13T14:57:11.321354736Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
grafana | logger=installer.fs t=2025-06-13T14:57:11.449707207Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app"
grafana | logger=plugins.registration t=2025-06-13T14:57:11.479687121Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app
grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:11.479730762Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=445.62135ms
grafana | logger=infra.usagestats t=2025-06-13T14:58:33.014058159Z level=info msg="Usage stats are ready to report"
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2025-06-13 14:57:08,218] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:08,219] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,219] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.arch=amd64 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,220] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,223] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@221af3c0 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,226] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:57:08,230] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 14:57:08,237] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:57:08,266] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:57:08,271] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:57:08,290] INFO Socket connection established, initiating session, client: /172.17.0.7:51786, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:57:08,320] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000265870000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:57:08,453] INFO Session: 0x100000265870000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:08,453] INFO EventThread shut down for session: 0x100000265870000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2025-06-13 14:57:09,271] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-13 14:57:09,595] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:57:09,675] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-13 14:57:09,676] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-13 14:57:09,676] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-13 14:57:09,690] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:57:09,694] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../
share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib 
(org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,694] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,695] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,697] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-13 14:57:09,701] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-13 14:57:09,707] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-13 14:57:09,708] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-13 14:57:09,711] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-13 14:57:09,716] INFO Socket connection established, initiating session, client: /172.17.0.7:51788, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-13 14:57:09,730] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000265870001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-13 14:57:09,735] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
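
[editor's note] The broker reaches ZooKeeper at zookeeper:2181 and negotiates an 18000 ms session before continuing startup. A minimal sketch of how the same ensemble could be inspected from Python, assuming the kazoo client library (not installed by this job) and the in-network hostname taken from the log; the /brokers/ids znode only appears once the broker registration logged further below has completed:

    # Sketch only: inspect the ZooKeeper ensemble the broker connected to.
    from kazoo.client import KazooClient

    # "zookeeper:2181" is the connect string from the broker log; outside the
    # compose network you would substitute the mapped host/port.
    zk = KazooClient(hosts="zookeeper:2181", timeout=18.0)
    zk.start()
    try:
        # After startup the broker registers an ephemeral znode under /brokers/ids.
        print("registered brokers:", zk.get_children("/brokers/ids"))
        data, stat = zk.get("/brokers/ids/1")
        print("broker 1 registration:", data.decode("utf-8"))
    finally:
        zk.stop()
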
kafka | [2025-06-13 14:57:10,053] INFO Cluster ID = avZVZcYzSMyVRlkHApEtCg (kafka.server.KafkaServer)
kafka | [2025-06-13 14:57:10,055] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-13 14:57:10,100] INFO KafkaConfig values:
kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | 	alter.config.policy.class.name = null
kafka | 	alter.log.dirs.replication.quota.window.num = 11
kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | 	authorizer.class.name = 
kafka | 	auto.create.topics.enable = true
kafka | 	auto.include.jmx.reporter = true
kafka | 	auto.leader.rebalance.enable = true
kafka | 	background.threads = 10
kafka | 	broker.heartbeat.interval.ms = 2000
kafka | 	broker.id = 1
kafka | 	broker.id.generation.enable = true
kafka | 	broker.rack = null
kafka | 	broker.session.timeout.ms = 9000
kafka | 	client.quota.callback.class = null
kafka | 	compression.type = producer
kafka | 	connection.failed.authentication.delay.ms = 100
kafka | 	connections.max.idle.ms = 600000
kafka | 	connections.max.reauth.ms = 0
kafka | 	control.plane.listener.name = null
kafka | 	controlled.shutdown.enable = true
kafka | 	controlled.shutdown.max.retries = 3
kafka | 	controlled.shutdown.retry.backoff.ms = 5000
kafka | 	controller.listener.names = null
kafka | 	controller.quorum.append.linger.ms = 25
kafka | 	controller.quorum.election.backoff.max.ms = 1000
kafka | 	controller.quorum.election.timeout.ms = 1000
kafka | 	controller.quorum.fetch.timeout.ms = 2000
kafka | 	controller.quorum.request.timeout.ms = 2000
kafka | 	controller.quorum.retry.backoff.ms = 20
kafka | 	controller.quorum.voters = []
kafka | 	controller.quota.window.num = 11
kafka | 	controller.quota.window.size.seconds = 1
kafka | 	controller.socket.timeout.ms = 30000
kafka | 	create.topic.policy.class.name = null
kafka | 	default.replication.factor = 1
kafka | 	delegation.token.expiry.check.interval.ms = 3600000
kafka | 	delegation.token.expiry.time.ms = 86400000
kafka | 	delegation.token.master.key = null
kafka | 	delegation.token.max.lifetime.ms = 604800000
kafka | 	delegation.token.secret.key = null
kafka | 	delete.records.purgatory.purge.interval.requests = 1
kafka | 	delete.topic.enable = true
kafka | 	early.start.listeners = null
kafka | 	fetch.max.bytes = 57671680
kafka | 	fetch.purgatory.purge.interval.requests = 1000
kafka | 	group.initial.rebalance.delay.ms = 3000
kafka | 	group.max.session.timeout.ms = 1800000
kafka | 	group.max.size = 2147483647
kafka | 	group.min.session.timeout.ms = 6000
kafka | 	initial.broker.registration.timeout.ms = 60000
kafka | 	inter.broker.listener.name = PLAINTEXT
kafka | 	inter.broker.protocol.version = 3.4-IV0
kafka | 	kafka.metrics.polling.interval.secs = 10
kafka | 	kafka.metrics.reporters = []
kafka | 	leader.imbalance.check.interval.seconds = 300
kafka | 	leader.imbalance.per.broker.percentage = 10
kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | 	log.cleaner.backoff.ms = 15000
kafka | 	log.cleaner.dedupe.buffer.size = 134217728
kafka | 	log.cleaner.delete.retention.ms = 86400000
kafka | 	log.cleaner.enable = true
kafka | 	log.cleaner.io.buffer.load.factor = 0.9
kafka | 	log.cleaner.io.buffer.size = 524288
kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | 	log.cleaner.min.cleanable.ratio = 0.5
kafka | 	log.cleaner.min.compaction.lag.ms = 0
kafka | 	log.cleaner.threads = 1
kafka | 	log.cleanup.policy = [delete]
kafka | 	log.dir = /tmp/kafka-logs
kafka | 	log.dirs = /var/lib/kafka/data
kafka | 	log.flush.interval.messages = 9223372036854775807
kafka | 	log.flush.interval.ms = null
kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | 	log.index.interval.bytes = 4096
kafka | 	log.index.size.max.bytes = 10485760
kafka | 	log.message.downconversion.enable = true
kafka | 	log.message.format.version = 3.0-IV1
kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.type = CreateTime
kafka | 	log.preallocate = false
kafka | 	log.retention.bytes = -1
kafka | 	log.retention.check.interval.ms = 300000
kafka | 	log.retention.hours = 168
kafka | 	log.retention.minutes = null
kafka | 	log.retention.ms = null
kafka | 	log.roll.hours = 168
kafka | 	log.roll.jitter.hours = 0
kafka | 	log.roll.jitter.ms = null
kafka | 	log.roll.ms = null
kafka | 	log.segment.bytes = 1073741824
kafka | 	log.segment.delete.delay.ms = 60000
kafka | 	max.connection.creation.rate = 2147483647
kafka | 	max.connections = 2147483647
kafka | 	max.connections.per.ip = 2147483647
kafka | 	max.connections.per.ip.overrides = 
kafka | 	max.incremental.fetch.session.cache.slots = 1000
kafka | 	message.max.bytes = 1048588
kafka | 	metadata.log.dir = null
kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
kafka | 	metadata.log.segment.bytes = 1073741824
kafka | 	metadata.log.segment.min.bytes = 8388608
kafka | 	metadata.log.segment.ms = 604800000
kafka | 	metadata.max.idle.interval.ms = 500
kafka | 	metadata.max.retention.bytes = 104857600
kafka | 	metadata.max.retention.ms = 604800000
kafka | 	metric.reporters = []
kafka | 	metrics.num.samples = 2
kafka | 	metrics.recording.level = INFO
kafka | 	metrics.sample.window.ms = 30000
kafka | 	min.insync.replicas = 1
kafka | 	node.id = 1
kafka | 	num.io.threads = 8
kafka | 	num.network.threads = 3
kafka | 	num.partitions = 1
kafka | 	num.recovery.threads.per.data.dir = 1
kafka | 	num.replica.alter.log.dirs.threads = null
kafka | 	num.replica.fetchers = 1
kafka | 	offset.metadata.max.bytes = 4096
kafka | 	offsets.commit.required.acks = -1
kafka | 	offsets.commit.timeout.ms = 5000
kafka | 	offsets.load.buffer.size = 5242880
kafka | 	offsets.retention.check.interval.ms = 600000
kafka | 	offsets.retention.minutes = 10080
kafka | 	offsets.topic.compression.codec = 0
kafka | 	offsets.topic.num.partitions = 50
kafka | 	offsets.topic.replication.factor = 1
kafka | 	offsets.topic.segment.bytes = 104857600
kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | 	password.encoder.iterations = 4096
kafka | 	password.encoder.key.length = 128
kafka | 	password.encoder.keyfactory.algorithm = null
kafka | 	password.encoder.old.secret = null
kafka | 	password.encoder.secret = null
kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | 	process.roles = []
kafka | 	producer.id.expiration.check.interval.ms = 600000
kafka | 	producer.id.expiration.ms = 86400000
kafka | 	producer.purgatory.purge.interval.requests = 1000
kafka | 	queued.max.request.bytes = -1
kafka | 	queued.max.requests = 500
kafka | 	quota.window.num = 11
kafka | 	quota.window.size.seconds = 1
kafka | 	remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | 	remote.log.manager.task.interval.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.ms = 500
kafka | 	remote.log.manager.task.retry.jitter = 0.2
kafka | 	remote.log.manager.thread.pool.size = 10
kafka | 	remote.log.metadata.manager.class.name = null
kafka | 	remote.log.metadata.manager.class.path = null
kafka | 	remote.log.metadata.manager.impl.prefix = null
kafka | 	remote.log.metadata.manager.listener.name = null
kafka | 	remote.log.reader.max.pending.tasks = 100
kafka | 	remote.log.reader.threads = 10
kafka | 	remote.log.storage.manager.class.name = null
kafka | 	remote.log.storage.manager.class.path = null
kafka | 	remote.log.storage.manager.impl.prefix = null
kafka | 	remote.log.storage.system.enable = false
kafka | 	replica.fetch.backoff.ms = 1000
kafka | 	replica.fetch.max.bytes = 1048576
kafka | 	replica.fetch.min.bytes = 1
kafka | 	replica.fetch.response.max.bytes = 10485760
kafka | 	replica.fetch.wait.max.ms = 500
kafka | 	replica.high.watermark.checkpoint.interval.ms = 5000
kafka | 	replica.lag.time.max.ms = 30000
kafka | 	replica.selector.class = null
kafka | 	replica.socket.receive.buffer.bytes = 65536
kafka | 	replica.socket.timeout.ms = 30000
kafka | 	replication.quota.window.num = 11
kafka | 	replication.quota.window.size.seconds = 1
kafka | 	request.timeout.ms = 30000
kafka | 	reserved.broker.max.id = 1000
kafka | 	sasl.client.callback.handler.class = null
kafka | 	sasl.enabled.mechanisms = [GSSAPI]
kafka | 	sasl.jaas.config = null
kafka | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | 	sasl.kerberos.min.time.before.relogin = 60000
kafka | 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | 	sasl.kerberos.service.name = null
kafka | 	sasl.kerberos.ticket.renew.jitter = 0.05
kafka | 	sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | 	sasl.login.callback.handler.class = null
kafka | 	sasl.login.class = null
kafka | 	sasl.login.connect.timeout.ms = null
kafka | 	sasl.login.read.timeout.ms = null
kafka | 	sasl.login.refresh.buffer.seconds = 300
kafka | 	sasl.login.refresh.min.period.seconds = 60
kafka | 	sasl.login.refresh.window.factor = 0.8
kafka | 	sasl.login.refresh.window.jitter = 0.05
kafka | 	sasl.login.retry.backoff.max.ms = 10000
kafka | 	sasl.login.retry.backoff.ms = 100
kafka | 	sasl.mechanism.controller.protocol = GSSAPI
kafka | 	sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | 	sasl.oauthbearer.clock.skew.seconds = 30
kafka | 	sasl.oauthbearer.expected.audience = null
kafka | 	sasl.oauthbearer.expected.issuer = null
kafka | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | 	sasl.oauthbearer.jwks.endpoint.url = null
kafka | 	sasl.oauthbearer.scope.claim.name = scope
kafka | 	sasl.oauthbearer.sub.claim.name = sub
kafka | 	sasl.oauthbearer.token.endpoint.url = null
kafka | 	sasl.server.callback.handler.class = null
kafka | 	sasl.server.max.receive.size = 524288
kafka | 	security.inter.broker.protocol = PLAINTEXT
kafka | 	security.providers = null
kafka | 	socket.connection.setup.timeout.max.ms = 30000
kafka | 	socket.connection.setup.timeout.ms = 10000
kafka | 	socket.listen.backlog.size = 50
kafka | 	socket.receive.buffer.bytes = 102400
kafka | 	socket.request.max.bytes = 104857600
kafka | 	socket.send.buffer.bytes = 102400
kafka | 	ssl.cipher.suites = []
kafka | 	ssl.client.auth = none
kafka | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | 	ssl.endpoint.identification.algorithm = https
kafka | 	ssl.engine.factory.class = null
kafka | 	ssl.key.password = null
kafka | 	ssl.keymanager.algorithm = SunX509
kafka | 	ssl.keystore.certificate.chain = null
kafka | 	ssl.keystore.key = null
kafka | 	ssl.keystore.location = null
kafka | 	ssl.keystore.password = null
kafka | 	ssl.keystore.type = JKS
kafka | 	ssl.principal.mapping.rules = DEFAULT
kafka | 	ssl.protocol = TLSv1.3
kafka | 	ssl.provider = null
kafka | 	ssl.secure.random.implementation = null
kafka | 	ssl.trustmanager.algorithm = PKIX
kafka | 	ssl.truststore.certificates = null
kafka | 	ssl.truststore.location = null
kafka | 	ssl.truststore.password = null
kafka | 	ssl.truststore.type = JKS
kafka | 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | 	transaction.max.timeout.ms = 900000
kafka | 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | 	transaction.state.log.load.buffer.size = 5242880
kafka | 	transaction.state.log.min.isr = 2
kafka | 	transaction.state.log.num.partitions = 50
kafka | 	transaction.state.log.replication.factor = 3
kafka | 	transaction.state.log.segment.bytes = 104857600
kafka | 	transactional.id.expiration.ms = 604800000
kafka | 	unclean.leader.election.enable = false
kafka | 	zookeeper.clientCnxnSocket = null
kafka | 	zookeeper.connect = zookeeper:2181
kafka | 	zookeeper.connection.timeout.ms = null
kafka | 	zookeeper.max.in.flight.requests = 10
kafka | 	zookeeper.metadata.migration.enable = false
kafka | 	zookeeper.session.timeout.ms = 18000
kafka | 	zookeeper.set.acl = false
kafka | 	zookeeper.ssl.cipher.suites = null
kafka | 	zookeeper.ssl.client.enable = false
kafka | 	zookeeper.ssl.crl.enable = false
kafka | 	zookeeper.ssl.enabled.protocols = null
kafka | 	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | 	zookeeper.ssl.keystore.location = null
kafka | 	zookeeper.ssl.keystore.password = null
kafka | 	zookeeper.ssl.keystore.type = null
kafka | 	zookeeper.ssl.ocsp.enable = false
kafka | 	zookeeper.ssl.protocol = TLSv1.2
kafka | 	zookeeper.ssl.truststore.location = null
kafka | 	zookeeper.ssl.truststore.password = null
kafka | 	zookeeper.ssl.truststore.type = null
kafka |  (kafka.server.KafkaConfig)
kafka | [2025-06-13 14:57:10,152] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 14:57:10,153] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 14:57:10,156] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 14:57:10,157] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 14:57:10,192] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-13 14:57:10,194] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-13 14:57:10,213] INFO Loaded 0 logs in 20ms. (kafka.log.LogManager)
kafka | [2025-06-13 14:57:10,213] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-13 14:57:10,215] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
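
[editor's note] Per the KafkaConfig dump above, the broker binds 0.0.0.0:9092 and 0.0.0.0:29092 but advertises PLAINTEXT://kafka:9092 to containers on the compose network and PLAINTEXT_HOST://localhost:29092 to the build host, with auto.create.topics.enable=true. A minimal host-side round-trip sketch, assuming the kafka-python package (not installed by this job); the endpoint and topic name come from this log:

    # Sketch only: produce and consume one record via the host-side listener.
    from kafka import KafkaConsumer, KafkaProducer

    BOOTSTRAP = "localhost:29092"  # PLAINTEXT_HOST advertised listener

    producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
    # .get() blocks until the broker acks; with auto.create.topics.enable=true
    # this send would create policy-pdp-pap if it did not exist yet.
    producer.send("policy-pdp-pap", value=b"connectivity-probe").get(timeout=10)
    producer.flush()

    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers=BOOTSTRAP,
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating once the topic is drained
    )
    for record in consumer:
        print(record.offset, record.value)
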
kafka | [2025-06-13 14:57:10,232] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-13 14:57:10,277] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-13 14:57:10,291] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-13 14:57:10,302] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 14:57:10,351] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:57:10,751] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 14:57:10,755] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 14:57:10,783] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-13 14:57:10,783] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 14:57:10,783] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 14:57:10,788] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-13 14:57:10,792] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:57:10,808] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,810] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,812] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,819] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,828] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-13 14:57:10,848] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:57:10,875] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749826630860,1749826630860,1,0,0,72057604331208705,258,0,27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:57:10,876] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:57:10,938] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-13 14:57:10,946] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,956] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,956] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:10,970] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:57:10,975] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:10,979] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:10,981] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:10,983] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:10,989] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 14:57:10,997] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:57:11,008] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-13 14:57:11,008] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:57:11,022] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-13 14:57:11,023] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,037] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,044] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,047] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,050] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:57:11,075] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,082] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,084] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-13 14:57:11,090] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-13 14:57:11,097] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-13 14:57:11,120] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:57:11,120] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:57:11,120] INFO Kafka startTimeMs: 1749826631113 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:57:11,122] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-13 14:57:11,123] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:57:11,124] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,125] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,125] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,125] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,129] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,129] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,129] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,130] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-13 14:57:11,130] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,133] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:57:11,143] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:57:11,145] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:57:11,154] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:57:11,154] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:57:11,155] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:57:11,157] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:57:11,162] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:57:11,162] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:57:11,162] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,169] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,170] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,170] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,170] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,171] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,184] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:11,231] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:57:11,273] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:57:11,312] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:57:16,185] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:16,186] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:42,539] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:42,548] INFO Creating topic policy-pdp-pap with configuration {} and initial partition
assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 14:57:42,549] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 14:57:42,557] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:42,596] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(KmlxgMzQQbaYMER6H0CRiQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(5-mwoVTETiqrL7f5O3IIJg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:42,598] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2025-06-13 14:57:42,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:42,608] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:42,615] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 
14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
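Note: the entries above show every partition coming online with leader=1, isr=List(1), i.e. a single-replica layout on the only broker in this compose setup. A minimal sketch of how that end state could be double-checked from the test side, assuming the confluent-kafka Python client and a reachable listener at localhost:9092 (both are assumptions, neither is part of this job's tooling):

from confluent_kafka.admin import AdminClient

# Assumed bootstrap address for this single-broker CSIT compose setup.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})
meta = admin.list_topics(timeout=10)

for topic in ("__consumer_offsets", "policy-pdp-pap"):
    t = meta.topics.get(topic)
    if t is None:
        print(f"{topic}: not found")
        continue
    for p in sorted(t.partitions.values(), key=lambda p: p.id):
        # Every OnlinePartition entry above should show up here as
        # leader=1 with replicas [1] and ISR [1].
        print(f"{topic}-{p.id}: leader={p.leader} replicas={p.replicas} isr={p.isrs}")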
kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,795] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:42,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 14:57:42,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 14:57:42,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 14:57:42,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 14:57:42,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 14:57:42,805] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 14:57:42,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 14:57:42,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 14:57:42,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 14:57:42,808] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:57:42,809] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:57:42,810] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
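Note: the 51 become-leader partitions are the 50 __consumer_offsets partitions plus the single policy-pdp-pap partition; with broker 1 as the only replica, nothing becomes a follower. The fields the state-change logger prints for each LeaderAndIsrPartitionState map onto a plain record like the illustrative Python below (a reading aid only, not Kafka's actual wire format):

from dataclasses import dataclass, field
from typing import List

@dataclass
class LeaderAndIsrPartitionState:
    topic_name: str
    partition_index: int
    controller_epoch: int
    leader: int                     # broker id expected to lead the partition
    leader_epoch: int
    isr: List[int]                  # in-sync replicas; always [1] in this run
    partition_epoch: int
    replicas: List[int]
    adding_replicas: List[int] = field(default_factory=list)
    removing_replicas: List[int] = field(default_factory=list)
    is_new: bool = True
    leader_recovery_state: int = 0  # 0 corresponds to RECOVERED

# The policy-pdp-pap-0 entry above, expressed as such a record:
pdp = LeaderAndIsrPartitionState(
    topic_name="policy-pdp-pap", partition_index=0, controller_epoch=1,
    leader=1, leader_epoch=0, isr=[1], partition_epoch=0, replicas=[1],
)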
kafka | [2025-06-13 14:57:42,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka |
[2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:42,818] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:57:42,822] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
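Note: each Received entry is the broker-side view of one partition state from the LeaderAndIsr request above; once the become-leader transitions that follow complete, the topics are servable. A hedged smoke test for exactly this path, with the same assumed client and address as above (polling policy-pdp-pap creates a consumer-group entry that lands in __consumer_offsets):

from confluent_kafka import Consumer

c = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed compose listener
    "group.id": "csit-smoke",               # hypothetical group id for this sketch
    "auto.offset.reset": "earliest",
})
c.subscribe(["policy-pdp-pap"])
msg = c.poll(timeout=5.0)                   # None until PAP publishes something
print("no traffic yet" if msg is None else msg.value())
c.close()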
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,828] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,829] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,830] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,830] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,830] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:42,830] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
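[Editor's note] The TRACE records above show the controller (broker 1, epoch 1) handing leadership state for each __consumer_offsets partition to the only broker in this single-node CSIT deployment, which is why every partition reports leader=1, isr=[1] and replicas=[1]. As a hedged aside, not part of the suite: the same assignments could be read back with kafka-python's admin client; the package, the bootstrap address and the key names are assumptions, not taken from this log.

    # Minimal sketch: list partition leadership for the two topics seen in this log.
    # Assumes kafka-python is installed and a broker answers on localhost:9092.
    from kafka.admin import KafkaAdminClient

    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    for topic in admin.describe_topics(["__consumer_offsets", "policy-pdp-pap"]):
        for part in sorted(topic["partitions"], key=lambda p: p["partition"]):
            # On this single-node setup every partition should report
            # leader=1, replicas=[1], isr=[1], matching the log above.
            print(topic["topic"], part["partition"], part["leader"], part["replicas"], part["isr"])
    admin.close()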
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 14:57:42,877] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 14:57:42,878] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 14:57:42,879] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-13 14:57:42,879] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
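[Editor's note] The 50 __consumer_offsets partitions plus policy-pdp-pap-0 account for the "51 partitions" in the Stopped-fetchers line. A consumer group's offsets and metadata land on exactly one of those 50 partitions, chosen as abs(groupId.hashCode) % 50. A minimal sketch of that mapping follows; the group id is hypothetical, and 50 matches the partition count visible in this log.

    # How Kafka picks the __consumer_offsets partition for a consumer group:
    # abs(groupId.hashCode) % numOffsetsPartitions, where hashCode is Java's
    # String.hashCode and abs(n) is n & 0x7fffffff.
    def java_string_hash_code(s: str) -> int:
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - (1 << 32) if h >= (1 << 31) else h  # to signed 32-bit

    def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
        return (java_string_hash_code(group_id) & 0x7FFFFFFF) % num_partitions

    print(offsets_partition_for("policy-pap"))  # hypothetical group id; prints 0..49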
kafka | [2025-06-13 14:57:42,936] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:42,947] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:42,949] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,949] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,950] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:42,975] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:42,978] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:42,982] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,982] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,983] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:42,995] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:42,996] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:42,996] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,996] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:42,996] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,004] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,005] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,005] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,005] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,006] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,020] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,021] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,021] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,021] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,021] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,030] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,031] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,031] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,031] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,031] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
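[Editor's note] Each offsets partition is created with the same three log properties: cleanup.policy=compact (keep only the latest record per key), compression.type=producer (retain whatever codec the producer used), and segment.bytes=104857600, i.e. 100 MiB per segment. A small sketch restating those values; the dict literal is copied from the "Created log" lines above, while the comments are standard Kafka topic-config semantics rather than something this log asserts.

    # Interpretation of the per-partition log properties printed above.
    props = {
        "cleanup.policy": "compact",     # log compaction: latest record per key survives
        "compression.type": "producer",  # broker keeps the producer's compression codec
        "segment.bytes": 104857600,      # roll a new log segment every 100 MiB
    }
    assert props["segment.bytes"] == 100 * 1024 * 1024
    for key, value in props.items():
        print(f"{key} = {value}")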
kafka | [2025-06-13 14:57:43,045] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,046] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,046] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,046] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,046] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,056] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,056] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,056] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,057] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,057] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,066] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,066] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,067] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,067] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,067] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,078] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,079] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,079] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,080] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,080] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,092] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,093] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,093] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,094] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,094] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,101] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,102] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,102] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,102] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,102] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,110] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,111] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,111] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,111] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,111] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,117] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,117] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,117] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,118] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,118] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,130] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,131] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,131] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,131] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,131] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,140] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,141] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,141] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,141] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,141] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,146] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,146] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,147] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,147] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,147] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,153] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,153] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,153] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,153] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,153] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
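[Editor's note] Every "Leader ... starts at leader epoch 0 from offset 0" record pairs with "initial high watermark 0" and "Previous leader epoch was -1": these are freshly created, empty partitions with no prior leader. A hedged client-side check of that emptiness, again assuming kafka-python and a hypothetical broker at localhost:9092, neither of which is taken from this log.

    # Verify a freshly created partition is empty: beginning and end offsets
    # both 0, matching "initial high watermark 0" in the records above.
    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    tp = TopicPartition("policy-pdp-pap", 0)
    start = consumer.beginning_offsets([tp])[tp]
    end = consumer.end_offsets([tp])[tp]
    print(f"{tp}: offsets {start}..{end}")  # expect 0..0 for an empty partition
    consumer.close()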
kafka | [2025-06-13 14:57:43,160] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,160] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,161] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,162] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,162] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,168] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,169] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,169] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,169] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,169] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,176] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,177] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,177] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,177] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,177] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,190] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,190] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,190] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,190] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,191] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,203] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,204] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,204] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,204] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,204] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,214] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,215] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,215] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,215] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,215] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,223] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,224] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,224] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,225] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,225] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,235] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,235] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,235] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,235] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,235] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,241] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,241] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,241] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,241] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,241] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,253] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,254] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,254] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,254] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,254] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,273] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,274] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,274] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,274] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,274] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,280] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,281] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,281] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,281] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,281] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
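[Editor's note] When skimming a run like this, the state.change.logger "Leader ... starts at leader epoch" records are the quickest way to confirm that every partition actually reached the leader state. A throwaway tally sketch follows; the file name is hypothetical, so save the console text to it first.

    # Count "Leader ... starts at leader epoch" records per topic in a saved log.
    import re
    from collections import Counter

    pattern = re.compile(r"INFO \[Broker id=\d+\] Leader (\S+?)-(\d+) with topic id")
    counts = Counter()
    with open("kafka-console.log", encoding="utf-8") as fh:
        for line in fh:
            for topic, _partition in pattern.findall(line):
                counts[topic] += 1

    for topic, n in counts.items():
        print(topic, n)  # expect __consumer_offsets 50, policy-pdp-pap 1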
kafka | [2025-06-13 14:57:43,286] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,286] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,286] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,286] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,286] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,292] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,292] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,292] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,292] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,292] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,300] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,301] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,301] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,301] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,301] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,307] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,307] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,307] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,307] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,307] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,324] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,325] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,325] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,325] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,325] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(KmlxgMzQQbaYMER6H0CRiQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,337] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,338] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,338] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,338] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,338] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,346] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,347] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,347] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,347] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,347] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,359] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,359] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,359] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,359] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,359] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,370] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,371] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,371] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,371] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,371] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,380] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,381] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,381] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,381] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,382] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,391] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,392] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,392] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,392] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,392] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:43,400] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:43,401] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:43,401] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,401] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:43,401] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-13 14:57:43,409] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,409] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,409] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,409] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,410] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,416] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,416] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,416] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,416] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,417] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,425] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,426] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,426] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,426] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,426] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:43,436] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,437] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,437] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,437] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,437] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,448] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,450] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,450] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,450] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,450] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,459] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,460] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,460] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,460] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,460] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:43,471] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,472] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,472] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,473] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,473] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,481] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,481] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,481] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,481] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,482] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:43,489] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:43,493] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:43,493] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,494] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:43,494] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(5-mwoVTETiqrL7f5O3IIJg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
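Note (not part of the captured run): the entries above show the broker creating the 50 compacted __consumer_offsets partitions plus policy-pdp-pap-0. A minimal sketch of how the applied settings (cleanup.policy=compact, segment.bytes=104857600) could be confirmed against the broker at kafka:9092, assuming the stock Kafka CLIs are on the PATH (Confluent images ship them without the .sh suffix):
$ kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic __consumer_offsets
$ kafka-configs.sh --bootstrap-server kafka:9092 --entity-type topics --entity-name __consumer_offsets --describe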
kafka | [2025-06-13 14:57:43,501] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 14:57:43,502] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
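Note (illustrative only, not part of the captured run): each TRACE line above records the become-leader transition for one of the 51 partitions. The resulting leadership and ISR state (leader 1, replicas [1], ISR [1]) could be cross-checked from inside the broker container with a hypothetical command like:
$ kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
# expected shape: Topic: policy-pdp-pap  Partition: 0  Leader: 1  Replicas: 1  Isr: 1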
kafka | [2025-06-13 14:57:43,507] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,513] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,516] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,516] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
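Note (illustrative only, not part of the captured run): the single broker has just elected itself group coordinator for every __consumer_offsets partition. Once clients connect, active groups and their coordinator assignments could be inspected with the standard tooling, given a group id to query:
$ kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list
$ kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group <group-id>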
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,517] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,517] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,518] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:57:43,518] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,520] INFO [Broker id=1] Finished LeaderAndIsr request in 698ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-13 14:57:43,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=5-mwoVTETiqrL7f5O3IIJg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=KmlxgMzQQbaYMER6H0CRiQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:57:43,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,531] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,532] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:57:43,533] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:57:44,082] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:44,093] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:44,255] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 975bac0f-f06c-4130-aa36-5c2192193718 in Empty state. Created a new member id consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:44,258] INFO [GroupCoordinator 1]: Preparing to rebalance group 975bac0f-f06c-4130-aa36-5c2192193718 in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8 with group instance id None; client reason: need to re-join with the given member-id: consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:44,397] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4240e27b-a45a-49d2-9890-b061568ad8c5 in Empty state. Created a new member id consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:44,399] INFO [GroupCoordinator 1]: Preparing to rebalance group 4240e27b-a45a-49d2-9890-b061568ad8c5 in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b with group instance id None; client reason: need to re-join with the given member-id: consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,108] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,134] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,259] INFO [GroupCoordinator 1]: Stabilized group 975bac0f-f06c-4130-aa36-5c2192193718 generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,275] INFO [GroupCoordinator 1]: Assignment received from leader consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8 for group 975bac0f-f06c-4130-aa36-5c2192193718 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,400] INFO [GroupCoordinator 1]: Stabilized group 4240e27b-a45a-49d2-9890-b061568ad8c5 generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:47,406] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b for group 4240e27b-a45a-49d2-9890-b061568ad8c5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.7:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2025-06-13T14:57:43.277+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2025-06-13T14:57:43.500+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-975bac0f-f06c-4130-aa36-5c2192193718-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 975bac0f-f06c-4130-aa36-5c2192193718 policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | 
metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-13T14:57:43.554+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T14:57:43.695+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-13T14:57:43.696+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-13T14:57:43.696+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826663694 policy-apex-pdp | [2025-06-13T14:57:43.698+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-1, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-13T14:57:43.717+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-13T14:57:43.718+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2025-06-13T14:57:43.719+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=975bac0f-f06c-4130-aa36-5c2192193718, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2025-06-13T14:57:43.737+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-975bac0f-f06c-4130-aa36-5c2192193718-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 975bac0f-f06c-4130-aa36-5c2192193718 policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp 
| partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | 
[2025-06-13T14:57:43.738+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T14:57:43.750+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-13T14:57:43.750+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-13T14:57:43.750+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826663750 policy-apex-pdp | [2025-06-13T14:57:43.751+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-13T14:57:43.751+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1680f4-f252-4d65-98ae-0c37ef96efd5, alive=false, publisher=null]]: starting policy-apex-pdp | [2025-06-13T14:57:43.762+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.gzip.level = -1 policy-apex-pdp | compression.lz4.level = 9 policy-apex-pdp | compression.type = none policy-apex-pdp | compression.zstd.level = 3 policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | 
sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2025-06-13T14:57:43.763+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T14:57:43.780+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
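Aside: the ProducerConfig dump above (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer) is the standard idempotent-producer setup. A minimal Java sketch of an equivalently configured client follows; only the broker address, topic name, and config values are taken from this log, and the class and payload are illustrative, not the policy-apex-pdp implementation:

    // Sketch of a producer configured like the ProducerConfig dump above:
    // idempotence on, acks=all (-1 in the dump), effectively unlimited retries.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpStatusProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");        // from the log
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);               // enable.idempotence = true
            props.put(ProducerConfig.ACKS_CONFIG, "all");                            // acks = -1 in the dump
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);             // retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Payload shape is illustrative only; the real PDP_STATUS body is built by policy-apex-pdp.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }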
policy-apex-pdp | [2025-06-13T14:57:43.796+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-13T14:57:43.796+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-13T14:57:43.796+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826663796 policy-apex-pdp | [2025-06-13T14:57:43.797+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6a1680f4-f252-4d65-98ae-0c37ef96efd5, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2025-06-13T14:57:43.797+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2025-06-13T14:57:43.797+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2025-06-13T14:57:43.799+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2025-06-13T14:57:43.799+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=975bac0f-f06c-4130-aa36-5c2192193718, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660 policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=975bac0f-f06c-4130-aa36-5c2192193718, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2025-06-13T14:57:43.801+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2025-06-13T14:57:43.814+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2025-06-13T14:57:43.816+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"397da19d-aa20-4917-853e-76333df1940e","timestampMs":1749826663801,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T14:57:44.057+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2025-06-13T14:57:44.057+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-13T14:57:44.057+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2025-06-13T14:57:44.057+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-apex-pdp | [2025-06-13T14:57:44.080+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-13T14:57:44.081+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-13T14:57:44.081+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
policy-apex-pdp | [2025-06-13T14:57:44.081+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-apex-pdp | [2025-06-13T14:57:44.211+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-apex-pdp | [2025-06-13T14:57:44.211+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-apex-pdp | [2025-06-13T14:57:44.213+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2025-06-13T14:57:44.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2025-06-13T14:57:44.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] (Re-)joining group policy-apex-pdp | [2025-06-13T14:57:44.256+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Request joining group due to: need to re-join with the given member-id: consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8 policy-apex-pdp | [2025-06-13T14:57:44.256+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] (Re-)joining group policy-apex-pdp | [2025-06-13T14:57:44.638+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2025-06-13T14:57:44.639+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | 
[2025-06-13T14:57:47.261+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Successfully joined group with generation Generation{generationId=1, memberId='consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8', protocol='range'} policy-apex-pdp | [2025-06-13T14:57:47.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Finished assignment for group at generation 1: {consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2025-06-13T14:57:47.278+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Successfully synced group in generation Generation{generationId=1, memberId='consumer-975bac0f-f06c-4130-aa36-5c2192193718-2-d060181d-fd17-4ff9-ab46-0090076a66d8', protocol='range'} policy-apex-pdp | [2025-06-13T14:57:47.279+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2025-06-13T14:57:47.280+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2025-06-13T14:57:47.288+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2025-06-13T14:57:47.302+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-975bac0f-f06c-4130-aa36-5c2192193718-2, groupId=975bac0f-f06c-4130-aa36-5c2192193718] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
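The join, sync, partition assignment and offset reset logged above are the standard Kafka consumer-group handshake; the PDP's Java client performs it implicitly once it subscribes. A rough equivalent with kafka-python (illustrative only, not the PDP's actual client) would be:

    # Sketch: subscribe to the topic the PDP consumes. Joining the group,
    # receiving the assignment (policy-pdp-pap-0) and resetting the offset
    # all happen inside the poll loop, exactly as logged above.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",                                  # topic from the log
        bootstrap_servers="kafka:9092",                    # broker from the log
        group_id="975bac0f-f06c-4130-aa36-5c2192193718",   # group id from the log
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        print(msg.partition, msg.offset, msg.value.get("messageName"))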
policy-apex-pdp | [2025-06-13T14:57:56.190+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.2 - policyadmin [13/Jun/2025:14:57:56 +0000] "GET /metrics HTTP/1.1" 200 1928 "" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-13T14:58:03.802+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"154aba81-ec7c-4a4d-a738-a978c53e911f","timestampMs":1749826683802,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T14:58:03.832+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"154aba81-ec7c-4a4d-a738-a978c53e911f","timestampMs":1749826683802,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T14:58:03.834+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T14:58:03.965+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c236035c-de19-4264-a109-9aaf7521026a","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:03.987+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"acbb0f56-eaef-4efc-aa97-b69ada0977eb","timestampMs":1749826683987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T14:58:03.987+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2025-06-13T14:58:03.990+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c236035c-de19-4264-a109-9aaf7521026a","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"1b5542ae-dccd-4605-b26a-2e3211ab50e5","timestampMs":1749826683989,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.007+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"acbb0f56-eaef-4efc-aa97-b69ada0977eb","timestampMs":1749826683987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T14:58:04.008+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T14:58:04.013+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"c236035c-de19-4264-a109-9aaf7521026a","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"1b5542ae-dccd-4605-b26a-2e3211ab50e5","timestampMs":1749826683989,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T14:58:04.044+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.046+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"74aeb02e-dd98-4a50-98d1-95cb4c196af5","timestampMs":1749826684046,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.055+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"74aeb02e-dd98-4a50-98d1-95cb4c196af5","timestampMs":1749826684046,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.056+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T14:58:04.096+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"203dc3d2-5143-4e60-824c-0790dde3f946","timestampMs":1749826684064,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.098+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"203dc3d2-5143-4e60-824c-0790dde3f946","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"6e16b126-90fd-4a55-bfdd-e90ef3e7b7f5","timestampMs":1749826684097,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.110+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"203dc3d2-5143-4e60-824c-0790dde3f946","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"6e16b126-90fd-4a55-bfdd-e90ef3e7b7f5","timestampMs":1749826684097,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T14:58:04.110+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T14:58:10.294+00:00|INFO|RequestLog|qtp1089680530-28] 172.17.0.1 - - [13/Jun/2025:14:58:10 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-apex-pdp | [2025-06-13T14:58:30.352+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [13/Jun/2025:14:58:30 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0" policy-apex-pdp | [2025-06-13T14:58:56.087+00:00|INFO|RequestLog|qtp1089680530-26] 172.17.0.2 - policyadmin [13/Jun/2025:14:58:56 +0000] "GET /metrics HTTP/1.1" 200 2059 "" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-13T14:59:56.084+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.2 - policyadmin [13/Jun/2025:14:59:56 +0000] "GET /metrics HTTP/1.1" 200 2059 "" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-13T15:00:03.988+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a3dca398-6852-4742-8276-c0b15eab384a","timestampMs":1749826803987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T15:00:04.001+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | 
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a3dca398-6852-4742-8276-c0b15eab384a","timestampMs":1749826803987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T15:00:04.001+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.8:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-13T14:57:21.656+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-13T14:57:21.735+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 32 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-13T14:57:21.736+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-13T14:57:23.136+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-13T14:57:23.295+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 148 ms. Found 6 JPA repository interfaces. policy-api | [2025-06-13T14:57:23.974+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-13T14:57:23.988+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T14:57:23.990+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-13T14:57:23.990+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-13T14:57:24.027+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-13T14:57:24.028+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2231 ms policy-api | [2025-06-13T14:57:24.342+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-13T14:57:24.425+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-13T14:57:24.473+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-13T14:57:24.876+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-13T14:57:24.917+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-13T14:57:25.112+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd policy-api | [2025-06-13T14:57:25.114+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2025-06-13T14:57:25.199+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-13T14:57:27.085+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-13T14:57:27.088+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-13T14:57:27.697+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-13T14:57:28.557+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-13T14:57:29.629+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-13T14:57:29.674+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-13T14:57:30.282+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-13T14:57:30.414+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T14:57:30.440+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-13T14:57:30.465+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.443 seconds (process running for 10.009) policy-api | [2025-06-13T14:57:39.931+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-13T14:57:39.931+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-13T14:57:39.932+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2025-06-13T14:59:14.638+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-5] ***** OrderedServiceImpl implementers: policy-api | [] policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit 
| -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
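The name/version table above is the migrator's bookkeeping: policyadmin sits at version 0 before the upgrade and, as the later output shows, at 1300 afterwards. Reading it back directly (table and column names as printed; the database and credentials are assumptions):

    # Sketch: read the recorded schema version the migrator just initialized.
    import psycopg2

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="<not shown in log>")
    with conn.cursor() as cur:
        cur.execute("SELECT name, version FROM schema_versions WHERE name = %s",
                    ("policyadmin",))
        print(cur.fetchone())   # ('policyadmin', 0) at this point in the log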
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > 
upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
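Each "> upgrade NNNN-*.sql ... rc=0" block above follows the same contract: run the next numbered script, report its return code, and append a row to policyadmin_schema_changelog. The visible pattern (a sketch of the pattern only, not the actual migrator script; paths and the psql invocation are assumptions) reduces to:

    # Sketch: ordered scripts, one rc per script, stop on first failure.
    import glob
    import subprocess

    for script in sorted(glob.glob("/opt/app/policy/sql/*.sql")):
        rc = subprocess.call(["psql", "-d", "policyadmin", "-f", script])
        print(f"> upgrade {script}\nrc={rc}")
        if rc != 0:
            break   # the changelog's success column would record the failure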
policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.048035
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.096826
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.1507
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.194794
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.239039
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.295299
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.360139
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.400697
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.455233
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.509753
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.560415
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.619825
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.669911
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.718526
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.764468
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.808031
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.865026
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.907546
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:09.960547
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.015798
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.069069
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.121777
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.17117
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.220463
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.270956
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.333587
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.378416
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.436149
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.486326
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.541129
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.598579
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.649722
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.701339
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.747361
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.799063
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.848703
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.901947
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:10.962779
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.014313
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.06846
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.119126
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.176474
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.227706
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.274035
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.327654
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.374273
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.423625
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.480979
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.52512
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.571544
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.637169
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.684929
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.732914
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.786264
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.845483
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.892327
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.941377
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:11.99482
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.045188
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.099579
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.153401
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.204468
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.261882
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.321915
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.376349
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.438099
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.492142
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.55286
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.607672
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.658354
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.701327
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.754538
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.805006
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.856055
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.906198
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:12.955877
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.007183
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.056041
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.105255
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.146946
policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.196473
policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.244891 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.300457 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.34418 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.388421 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.440419 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.490397 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.544046 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.58692 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.628927 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.680093 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.727961 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.777887 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.834562 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.887045 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251457080800u | 1 | 2025-06-13 14:57:13.940315 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:13.988509 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.041173 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.092451 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.141262 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.197703 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.250598 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.302296 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.35575 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.400643 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.46264 
policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.514971 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.567779 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306251457080900u | 1 | 2025-06-13 14:57:14.620688 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.674625 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.726942 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.779274 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.830901 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.879353 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.932732 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:14.988623 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:15.043967 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306251457081000u | 1 | 2025-06-13 14:57:15.089764 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306251457081100u | 1 | 2025-06-13 14:57:15.138449 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306251457081200u | 1 | 2025-06-13 14:57:15.186611 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306251457081200u | 1 | 2025-06-13 14:57:15.239946 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306251457081200u | 1 | 2025-06-13 14:57:15.293506 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306251457081200u | 1 | 2025-06-13 14:57:15.352322 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306251457081300u | 1 | 2025-06-13 14:57:15.402059 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306251457081300u | 1 | 2025-06-13 14:57:15.448371 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306251457081300u | 1 | 2025-06-13 14:57:15.500931 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... 
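The changelog the migrator prints above is ordinary table data, so the same success check can be re-run by hand against the database. A minimal sketch in Python (the host, database name, and password below are illustrative assumptions, not values taken from this job; psycopg2 is assumed to be available):

    import psycopg2  # any PostgreSQL client would do

    # Illustrative connection details -- substitute the real service host and
    # credentials; the changelog is assumed here to live in the policyadmin database.
    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="CHANGE_ME")
    with conn, conn.cursor() as cur:
        # Mirror the migrator's final report: every script must have success = 1.
        cur.execute("SELECT script, tag FROM policyadmin_schema_changelog"
                    " WHERE success <> 1 ORDER BY id")
        failures = cur.fetchall()
        print("policyadmin: OK" if not failures else failures)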
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 0
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator | 
policy-db-migrator | clampacm: upgrade available: 0 -> 1701
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1701
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0300-participantreplica.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0400-participant.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-message.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-messagejob.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0800-participantreplica.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | clampacm: OK: upgrade (1701)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 1701
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.144353
policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.207308
policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.269557
policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.329222
policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.388047
policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.446819
policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.504179
policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.555048
policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.607619
policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.658272
policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.712624
policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.766913
policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306251457161400u | 1 | 2025-06-13 14:57:16.816854
policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:16.8671
policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:16.916212
policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:16.977657
policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:17.023222
policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:17.071462
policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:17.123061
policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:17.174035
policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306251457161500u | 1 | 2025-06-13 14:57:17.233722
policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306251457161600u | 1 | 2025-06-13 14:57:17.283303
policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306251457161600u | 1 | 2025-06-13 14:57:17.33362
policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306251457161601u | 1 | 2025-06-13 14:57:17.379997
policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306251457161601u | 1 | 2025-06-13 14:57:17.430947
policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306251457161700u | 1 | 2025-06-13 14:57:17.481122
policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306251457161700u | 1 | 2025-06-13 14:57:17.535087
policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306251457161700u | 1 | 2025-06-13 14:57:17.590337
policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.649727
policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.702271
policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.752903
policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.80524
policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.85799
policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.9122
policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:17.966168
policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:18.021413
policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306251457161701u | 1 | 2025-06-13 14:57:18.076563
policy-db-migrator | (37 rows)
policy-db-migrator | 
policy-db-migrator | clampacm: OK @ 1701
policy-db-migrator | Initializing pooling...
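Reading the tag column: comparing tags with the attime column suggests each batch tag is the migrator's start timestamp (DDMMYYHHMMSS) followed by the target version and a trailing 'u' for upgrade. That decoding is inferred purely from the rows above, not from migrator documentation; a small sketch:

    from datetime import datetime

    def decode_tag(tag: str):
        """Split a policy-db-migrator batch tag, e.g. '1306251457161400u'.

        Layout inferred from the changelog rows above (an assumption):
        DDMMYYHHMMSS (batch start) + to_version + 'u' (upgrade).
        """
        stamp, version, op = tag[:12], tag[12:-1], tag[-1]
        started = datetime.strptime(stamp, "%d%m%y%H%M%S")
        return started, version, "upgrade" if op == "u" else op

    print(decode_tag("1306251457161400u"))
    # (datetime.datetime(2025, 6, 13, 14, 57, 16), '1400', 'upgrade')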
policy-db-migrator | 4 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 0
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator | 
policy-db-migrator | pooling: upgrade available: 0 -> 1600
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-distributed.locking.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | pooling: OK: upgrade (1600)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 1600
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306251457181600u | 1 | 2025-06-13 14:57:18.754855
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | pooling: OK @ 1600
policy-db-migrator | Initializing operationshistory...
policy-db-migrator | 6 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 0
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator | 
policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-operationshistory.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | operationshistory: OK: upgrade (1600)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306251457191600u | 1 | 2025-06-13 14:57:19.392935
policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306251457191600u | 1 | 2025-06-13 14:57:19.47046
policy-db-migrator | (2 rows)
policy-db-migrator | 
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.9:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.7:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | 
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap | 
policy-pap | :: Spring Boot ::                (v3.4.6)
policy-pap | 
policy-pap | [2025-06-13T14:57:33.108+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 54 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-13T14:57:33.109+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-13T14:57:34.472+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-13T14:57:34.557+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 7 JPA repository interfaces.
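Before the pap process starts, its container blocks until the api and kafka ports accept TCP connections ("Waiting for api port 6969..."). The entrypoint itself is a shell script; the Python sketch below is only an illustrative equivalent of that gate, not the actual entrypoint code:

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout_s: float = 300.0) -> None:
        # Poll until a TCP connect succeeds, as the container does for
        # api:6969 and kafka:9092 before launching pap.jar.
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"{host} ({port}) open")
                    return
            except OSError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"{host}:{port} not reachable")
                time.sleep(1.0)

    wait_for_port("api", 6969)
    wait_for_port("kafka", 9092)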
policy-pap | [2025-06-13T14:57:35.480+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-13T14:57:35.493+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-13T14:57:35.495+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-13T14:57:35.495+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-13T14:57:35.552+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-13T14:57:35.553+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2388 ms
policy-pap | [2025-06-13T14:57:35.993+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-13T14:57:36.066+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-13T14:57:36.107+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-13T14:57:36.554+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-13T14:57:36.600+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-13T14:57:36.833+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd
policy-pap | [2025-06-13T14:57:36.835+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2025-06-13T14:57:36.932+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap |     Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap |     Database driver: undefined/unknown
policy-pap |     Database version: 16.4
policy-pap |     Autocommit mode: undefined/unknown
policy-pap |     Isolation level: undefined/unknown
policy-pap |     Minimum pool size: undefined/unknown
policy-pap |     Maximum pool size: undefined/unknown
policy-pap | [2025-06-13T14:57:38.917+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-13T14:57:38.922+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-13T14:57:40.145+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap |     allow.auto.create.topics = true
policy-pap |     auto.commit.interval.ms = 5000
policy-pap |     auto.include.jmx.reporter = true
policy-pap |     auto.offset.reset = latest
policy-pap |     bootstrap.servers = [kafka:9092]
policy-pap |     check.crcs = true
policy-pap |     client.dns.lookup = use_all_dns_ips
policy-pap |     client.id = consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-1
policy-pap |     client.rack =
policy-pap |     connections.max.idle.ms = 540000
policy-pap |     default.api.timeout.ms = 60000
policy-pap |     enable.auto.commit = true
policy-pap |     enable.metrics.push = true
policy-pap |     exclude.internal.topics = true
policy-pap |     fetch.max.bytes = 52428800
policy-pap |     fetch.max.wait.ms = 500
policy-pap |     fetch.min.bytes = 1
policy-pap |     group.id = 4240e27b-a45a-49d2-9890-b061568ad8c5
policy-pap |     group.instance.id = null
policy-pap |     group.protocol = classic
policy-pap |     group.remote.assignor = null
policy-pap |     heartbeat.interval.ms = 3000
policy-pap |     interceptor.classes = []
policy-pap |     internal.leave.group.on.close = true
policy-pap |     internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap |     isolation.level = read_uncommitted
policy-pap |     key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |     max.partition.fetch.bytes = 1048576
policy-pap |     max.poll.interval.ms = 300000
policy-pap |     max.poll.records = 500
policy-pap |     metadata.max.age.ms = 300000
policy-pap |     metadata.recovery.strategy = none
policy-pap |     metric.reporters = []
policy-pap |     metrics.num.samples = 2
policy-pap |     metrics.recording.level = INFO
policy-pap |     metrics.sample.window.ms = 30000
policy-pap |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap |     receive.buffer.bytes = 65536
policy-pap |     reconnect.backoff.max.ms = 1000
policy-pap |     reconnect.backoff.ms = 50
policy-pap |     request.timeout.ms = 30000
policy-pap |     retry.backoff.max.ms = 1000
policy-pap |     retry.backoff.ms = 100
policy-pap |     sasl.client.callback.handler.class = null
policy-pap |     sasl.jaas.config = null
policy-pap |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap |     sasl.kerberos.min.time.before.relogin = 60000
policy-pap |     sasl.kerberos.service.name = null
policy-pap |     sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap |     sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap |     sasl.login.callback.handler.class = null
policy-pap |     sasl.login.class = null
policy-pap |     sasl.login.connect.timeout.ms = null
policy-pap |     sasl.login.read.timeout.ms = null
policy-pap |     sasl.login.refresh.buffer.seconds = 300
policy-pap |     sasl.login.refresh.min.period.seconds = 60
policy-pap |     sasl.login.refresh.window.factor = 0.8
policy-pap |     sasl.login.refresh.window.jitter = 0.05
policy-pap |     sasl.login.retry.backoff.max.ms = 10000
policy-pap |     sasl.login.retry.backoff.ms = 100
policy-pap |     sasl.mechanism = GSSAPI
policy-pap |     sasl.oauthbearer.clock.skew.seconds = 30
policy-pap |     sasl.oauthbearer.expected.audience = null
policy-pap |     sasl.oauthbearer.expected.issuer = null
policy-pap |     sasl.oauthbearer.header.urlencode = false
policy-pap |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap |     sasl.oauthbearer.jwks.endpoint.url = null
policy-pap |     sasl.oauthbearer.scope.claim.name = scope
policy-pap |     sasl.oauthbearer.sub.claim.name = sub
policy-pap |     sasl.oauthbearer.token.endpoint.url = null
policy-pap |     security.protocol = PLAINTEXT
policy-pap |     security.providers = null
policy-pap |     send.buffer.bytes = 131072
policy-pap |     session.timeout.ms = 45000
policy-pap |     socket.connection.setup.timeout.max.ms = 30000
policy-pap |     socket.connection.setup.timeout.ms = 10000
policy-pap |     ssl.cipher.suites = null
policy-pap |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap |     ssl.endpoint.identification.algorithm = https
policy-pap |     ssl.engine.factory.class = null
policy-pap |     ssl.key.password = null
policy-pap |     ssl.keymanager.algorithm = SunX509
policy-pap |     ssl.keystore.certificate.chain = null
policy-pap |     ssl.keystore.key = null
policy-pap |     ssl.keystore.location = null
policy-pap |     ssl.keystore.password = null
policy-pap |     ssl.keystore.type = JKS
policy-pap |     ssl.protocol = TLSv1.3
policy-pap |     ssl.provider = null
policy-pap |     ssl.secure.random.implementation = null
policy-pap |     ssl.trustmanager.algorithm = PKIX
policy-pap |     ssl.truststore.certificates = null
policy-pap |     ssl.truststore.location = null
policy-pap |     ssl.truststore.password = null
policy-pap |     ssl.truststore.type = JKS
policy-pap |     value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-13T14:57:40.198+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-13T14:57:40.346+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-13T14:57:40.346+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-13T14:57:40.346+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826660343
policy-pap | [2025-06-13T14:57:40.348+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-1, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-13T14:57:40.349+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap |     allow.auto.create.topics = true
policy-pap |     auto.commit.interval.ms = 5000
policy-pap |     auto.include.jmx.reporter = true
policy-pap |     auto.offset.reset = latest
policy-pap |     bootstrap.servers = [kafka:9092]
policy-pap |     check.crcs = true
policy-pap |     client.dns.lookup = use_all_dns_ips
policy-pap |     client.id = consumer-policy-pap-2
policy-pap |     client.rack =
policy-pap |     connections.max.idle.ms = 540000
policy-pap |     default.api.timeout.ms = 60000
policy-pap |     enable.auto.commit = true
policy-pap |     enable.metrics.push = true
policy-pap |     exclude.internal.topics = true
policy-pap |     fetch.max.bytes = 52428800
policy-pap |     fetch.max.wait.ms = 500
policy-pap |     fetch.min.bytes = 1
policy-pap |     group.id = policy-pap
policy-pap |     group.instance.id = null
policy-pap |     group.protocol = classic
policy-pap |     group.remote.assignor = null
policy-pap |     heartbeat.interval.ms = 3000
policy-pap |     interceptor.classes = []
policy-pap |     internal.leave.group.on.close = true
policy-pap |     internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap |     isolation.level = read_uncommitted
policy-pap |     key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |     max.partition.fetch.bytes = 1048576
policy-pap |     max.poll.interval.ms = 300000
policy-pap |     max.poll.records = 500
policy-pap |     metadata.max.age.ms = 300000
policy-pap |     metadata.recovery.strategy = none
policy-pap |     metric.reporters = []
policy-pap |     metrics.num.samples = 2
policy-pap |     metrics.recording.level = INFO
policy-pap |     metrics.sample.window.ms = 30000
policy-pap |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap |     receive.buffer.bytes = 65536
policy-pap |     reconnect.backoff.max.ms = 1000
policy-pap |     reconnect.backoff.ms = 50
policy-pap |     request.timeout.ms = 30000
policy-pap |     retry.backoff.max.ms = 1000
policy-pap |     retry.backoff.ms = 100
policy-pap |     sasl.client.callback.handler.class = null
policy-pap |     sasl.jaas.config = null
policy-pap |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap |     sasl.kerberos.min.time.before.relogin = 60000
policy-pap |     sasl.kerberos.service.name = null
policy-pap |     sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap |     sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap |     sasl.login.callback.handler.class = null
policy-pap |     sasl.login.class = null
policy-pap |     sasl.login.connect.timeout.ms = null
policy-pap |     sasl.login.read.timeout.ms = null
policy-pap |     sasl.login.refresh.buffer.seconds = 300
policy-pap |     sasl.login.refresh.min.period.seconds = 60
policy-pap |     sasl.login.refresh.window.factor = 0.8
policy-pap |     sasl.login.refresh.window.jitter = 0.05
policy-pap |     sasl.login.retry.backoff.max.ms = 10000
policy-pap |     sasl.login.retry.backoff.ms = 100
policy-pap |     sasl.mechanism = GSSAPI
policy-pap |     sasl.oauthbearer.clock.skew.seconds = 30
policy-pap |     sasl.oauthbearer.expected.audience = null
policy-pap |     sasl.oauthbearer.expected.issuer = null
policy-pap |     sasl.oauthbearer.header.urlencode = false
policy-pap |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap |     sasl.oauthbearer.jwks.endpoint.url = null
policy-pap |     sasl.oauthbearer.scope.claim.name = scope
policy-pap |     sasl.oauthbearer.sub.claim.name = sub
policy-pap |     sasl.oauthbearer.token.endpoint.url = null
policy-pap |     security.protocol = PLAINTEXT
policy-pap |     security.providers = null
policy-pap |     send.buffer.bytes = 131072
policy-pap |     session.timeout.ms = 45000
policy-pap |     socket.connection.setup.timeout.max.ms = 30000
policy-pap |     socket.connection.setup.timeout.ms = 10000
policy-pap |     ssl.cipher.suites = null
policy-pap |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap |     ssl.endpoint.identification.algorithm = https
policy-pap |     ssl.engine.factory.class = null
policy-pap |     ssl.key.password = null
policy-pap |     ssl.keymanager.algorithm = SunX509
policy-pap |     ssl.keystore.certificate.chain = null
policy-pap |     ssl.keystore.key = null
policy-pap |     ssl.keystore.location = null
policy-pap |     ssl.keystore.password = null
policy-pap |     ssl.keystore.type = JKS
policy-pap |     ssl.protocol = TLSv1.3
policy-pap |     ssl.provider = null
policy-pap |     ssl.secure.random.implementation = null
policy-pap |     ssl.trustmanager.algorithm = PKIX
policy-pap |     ssl.truststore.certificates = null
policy-pap |     ssl.truststore.location = null
policy-pap |     ssl.truststore.password = null
policy-pap |     ssl.truststore.type = JKS
policy-pap |     value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-13T14:57:40.349+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-13T14:57:40.357+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-13T14:57:40.357+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-13T14:57:40.357+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826660357
policy-pap | [2025-06-13T14:57:40.358+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-13T14:57:40.662+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2025-06-13T14:57:40.777+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-13T14:57:40.860+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-13T14:57:41.067+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-13T14:57:41.798+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-13T14:57:41.918+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T14:57:41.935+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-13T14:57:41.953+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-13T14:57:41.953+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-13T14:57:41.954+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-13T14:57:41.955+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-13T14:57:41.955+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-13T14:57:41.955+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-13T14:57:41.955+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-13T14:57:41.957+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4240e27b-a45a-49d2-9890-b061568ad8c5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7bf96c4e policy-pap | [2025-06-13T14:57:41.967+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4240e27b-a45a-49d2-9890-b061568ad8c5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:41.968+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 4240e27b-a45a-49d2-9890-b061568ad8c5 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:41.969+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:41.976+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:41.976+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:41.976+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826661976 policy-pap | [2025-06-13T14:57:41.976+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:41.977+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-13T14:57:41.977+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c419b8c9-7e1c-4796-be51-5785517e2039, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@77d5a3ee policy-pap | [2025-06-13T14:57:41.977+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c419b8c9-7e1c-4796-be51-5785517e2039, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:41.978+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:41.978+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:41.983+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:41.984+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:41.984+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826661983 policy-pap | [2025-06-13T14:57:41.984+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:41.984+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-13T14:57:41.984+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c419b8c9-7e1c-4796-be51-5785517e2039, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:41.985+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4240e27b-a45a-49d2-9890-b061568ad8c5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:41.985+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e782148-08f0-4766-997e-e7b24233735f, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:57:41.997+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:57:41.998+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:42.011+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-13T14:57:42.028+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:42.028+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:42.028+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826662028 policy-pap | [2025-06-13T14:57:42.028+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e782148-08f0-4766-997e-e7b24233735f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:57:42.028+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5f1e1be-acf6-41fc-b8c1-21882695cc0c, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:57:42.029+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:57:42.029+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:42.030+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
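The two ProducerConfig dumps above describe idempotent, string-serialized producers pointed at kafka:9092 with acks = -1 (all), retries = 2147483647 and linger.ms = 0. As a point of reference, here is a minimal Java sketch that reproduces those logged settings with the plain Apache Kafka client; it is not PAP's InlineKafkaTopicSink wrapper, and the topic name and JSON payload are taken from later log lines for illustration only.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ProducerConfig dump in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                 // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // "Instantiated an idempotent producer"
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload: PAP publishes JSON PDP_UPDATE / PDP_STATE_CHANGE messages on this topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
        }
    }
}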
policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826662035 policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5f1e1be-acf6-41fc-b8c1-21882695cc0c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-13T14:57:42.035+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-13T14:57:42.036+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-13T14:57:42.037+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-13T14:57:42.038+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-13T14:57:42.039+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-13T14:57:42.039+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-13T14:57:42.039+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-13T14:57:42.039+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-13T14:57:42.040+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-13T14:57:42.040+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-13T14:57:42.041+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.705 seconds (process running for 10.271) policy-pap | [2025-06-13T14:57:42.506+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-pap | [2025-06-13T14:57:42.509+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T14:57:42.509+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-pap | [2025-06-13T14:57:42.511+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-pap | [2025-06-13T14:57:42.566+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-13T14:57:42.568+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-13T14:57:42.588+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2025-06-13T14:57:42.588+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: avZVZcYzSMyVRlkHApEtCg policy-pap | [2025-06-13T14:57:42.711+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T14:57:42.739+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:42.927+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:42.975+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:43.301+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:43.430+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:44.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T14:57:44.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T14:57:44.086+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e policy-pap | [2025-06-13T14:57:44.086+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T14:57:44.389+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T14:57:44.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] (Re-)joining group policy-pap | [2025-06-13T14:57:44.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Request joining group due to: need to re-join with the given member-id: consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b policy-pap | [2025-06-13T14:57:44.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] (Re-)joining group policy-pap | [2025-06-13T14:57:47.111+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e', protocol='range'} policy-pap | [2025-06-13T14:57:47.122+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T14:57:47.149+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-9b696c04-88cd-4397-866e-2eea30bf114e', protocol='range'} policy-pap | [2025-06-13T14:57:47.150+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T14:57:47.157+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:47.173+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:47.192+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
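The join/sync sequence above is the classic-protocol group rebalance: the consumer in group policy-pap is assigned the single partition policy-pdp-pap-0 and, finding no committed offset, falls back to auto.offset.reset = latest. A hedged sketch of an equivalent stand-alone consumer built from the logged ConsumerConfig values follows; the group id and topic come from the log, while the poll loop and printing are illustrative only.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ConsumerConfig dumps in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);    // auto-commit every 5 s per the dump
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            rec.partition(), rec.offset(), rec.value());
                }
            }
        }
    }
}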
policy-pap | [2025-06-13T14:57:47.402+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b', protocol='range'} policy-pap | [2025-06-13T14:57:47.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Finished assignment for group at generation 1: {consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T14:57:47.410+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3-70911273-18fe-43c3-a086-9de441da481b', protocol='range'} policy-pap | [2025-06-13T14:57:47.410+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T14:57:47.410+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:47.412+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:47.414+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4240e27b-a45a-49d2-9890-b061568ad8c5-3, groupId=4240e27b-a45a-49d2-9890-b061568ad8c5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
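Both consumer groups independently land on policy-pdp-pap-0, and since neither group has ever committed, each position is reset to the log-end offset (here 1), so only messages published after startup are delivered. A small sketch, assuming a consumer configured as above, that surfaces the same state the broker reported in these log lines; note that a zero-length poll may return before the first join/assignment has actually completed.

import java.time.Duration;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

final class AssignmentStateSketch {
    // Sketch only: 'consumer' is a KafkaConsumer<String,String> already subscribed to policy-pdp-pap.
    static void dump(KafkaConsumer<String, String> consumer) {
        consumer.poll(Duration.ZERO);  // trigger the join; a longer poll may be needed in practice
        Set<TopicPartition> assigned = consumer.assignment();
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(assigned);
        for (TopicPartition tp : assigned) {
            // With no committed offset and auto.offset.reset=latest, position() is the log-end offset.
            System.out.printf("%s committed=%s position=%d%n",
                    tp, committed.get(tp), consumer.position(tp));
        }
    }
}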
policy-pap | [2025-06-13T14:58:03.841+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-13T14:58:03.841+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"154aba81-ec7c-4a4d-a738-a978c53e911f","timestampMs":1749826683802,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:58:03.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"154aba81-ec7c-4a4d-a738-a978c53e911f","timestampMs":1749826683802,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:58:03.849+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T14:58:03.926+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting policy-pap | [2025-06-13T14:58:03.926+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting listener policy-pap | [2025-06-13T14:58:03.926+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting timer policy-pap | [2025-06-13T14:58:03.927+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c236035c-de19-4264-a109-9aaf7521026a, expireMs=1749826713927] policy-pap | [2025-06-13T14:58:03.928+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting enqueue policy-pap | [2025-06-13T14:58:03.928+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate started policy-pap | [2025-06-13T14:58:03.928+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c236035c-de19-4264-a109-9aaf7521026a, expireMs=1749826713927] policy-pap | [2025-06-13T14:58:03.935+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c236035c-de19-4264-a109-9aaf7521026a","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:03.967+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c236035c-de19-4264-a109-9aaf7521026a","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:03.968+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:03.971+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c236035c-de19-4264-a109-9aaf7521026a","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:03.971+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:03.999+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"acbb0f56-eaef-4efc-aa97-b69ada0977eb","timestampMs":1749826683987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:58:04.000+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"acbb0f56-eaef-4efc-aa97-b69ada0977eb","timestampMs":1749826683987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:58:04.000+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T14:58:04.006+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c236035c-de19-4264-a109-9aaf7521026a","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"1b5542ae-dccd-4605-b26a-2e3211ab50e5","timestampMs":1749826683989,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.025+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping policy-pap | [2025-06-13T14:58:04.026+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping enqueue policy-pap | [2025-06-13T14:58:04.026+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping timer policy-pap | [2025-06-13T14:58:04.026+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c236035c-de19-4264-a109-9aaf7521026a, expireMs=1749826713927] policy-pap | [2025-06-13T14:58:04.026+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping listener policy-pap | [2025-06-13T14:58:04.026+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopped policy-pap | [2025-06-13T14:58:04.030+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c236035c-de19-4264-a109-9aaf7521026a","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"1b5542ae-dccd-4605-b26a-2e3211ab50e5","timestampMs":1749826683989,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.030+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c236035c-de19-4264-a109-9aaf7521026a policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate successful policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a start publishing next request policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange starting policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange starting listener policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange starting timer policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=4a2b2ad7-3534-44cc-b7fe-5409f130a16c, expireMs=1749826714033] policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange starting enqueue policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange started policy-pap | [2025-06-13T14:58:04.033+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=4a2b2ad7-3534-44cc-b7fe-5409f130a16c, expireMs=1749826714033] policy-pap | [2025-06-13T14:58:04.034+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.043+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.043+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T14:58:04.054+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"74aeb02e-dd98-4a50-98d1-95cb4c196af5","timestampMs":1749826684046,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.054+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4a2b2ad7-3534-44cc-b7fe-5409f130a16c policy-pap | [2025-06-13T14:58:04.071+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","timestampMs":1749826683913,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.071+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"4a2b2ad7-3534-44cc-b7fe-5409f130a16c","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"74aeb02e-dd98-4a50-98d1-95cb4c196af5","timestampMs":1749826684046,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange stopping policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange stopping enqueue policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange stopping timer policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=4a2b2ad7-3534-44cc-b7fe-5409f130a16c, expireMs=1749826714033] policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange stopping listener policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange stopped policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpStateChange successful policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a start publishing next request policy-pap | [2025-06-13T14:58:04.073+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting listener policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting timer policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=203dc3d2-5143-4e60-824c-0790dde3f946, expireMs=1749826714074] 
policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate starting enqueue policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate started policy-pap | [2025-06-13T14:58:04.074+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"203dc3d2-5143-4e60-824c-0790dde3f946","timestampMs":1749826684064,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.083+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"203dc3d2-5143-4e60-824c-0790dde3f946","timestampMs":1749826684064,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.083+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:04.090+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4074f89f-5c63-4e97-bd61-d581468759b3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"203dc3d2-5143-4e60-824c-0790dde3f946","timestampMs":1749826684064,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.090+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:04.105+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"203dc3d2-5143-4e60-824c-0790dde3f946","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"6e16b126-90fd-4a55-bfdd-e90ef3e7b7f5","timestampMs":1749826684097,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.105+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"203dc3d2-5143-4e60-824c-0790dde3f946","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"6e16b126-90fd-4a55-bfdd-e90ef3e7b7f5","timestampMs":1749826684097,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping enqueue policy-pap | 
[2025-06-13T14:58:04.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping timer policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=203dc3d2-5143-4e60-824c-0790dde3f946, expireMs=1749826714074] policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopping listener policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate stopped policy-pap | [2025-06-13T14:58:04.106+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 203dc3d2-5143-4e60-824c-0790dde3f946 policy-pap | [2025-06-13T14:58:04.111+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a PdpUpdate successful policy-pap | [2025-06-13T14:58:04.111+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a has no more requests policy-pap | [2025-06-13T14:58:33.928+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c236035c-de19-4264-a109-9aaf7521026a, expireMs=1749826713927] policy-pap | [2025-06-13T14:58:34.033+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=4a2b2ad7-3534-44cc-b7fe-5409f130a16c, expireMs=1749826714033] policy-pap | [2025-06-13T14:58:41.593+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-13T14:58:41.593+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-13T14:58:41.596+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms policy-pap | [2025-06-13T14:59:36.464+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-13T14:59:36.471+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2025-06-13T14:59:36.851+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup policy-pap | [2025-06-13T14:59:37.435+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup policy-pap | [2025-06-13T14:59:37.435+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup policy-pap | [2025-06-13T14:59:37.924+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2025-06-13T14:59:38.197+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-13T14:59:38.310+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-13T14:59:38.310+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup policy-pap | [2025-06-13T14:59:38.310+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup policy-pap | [2025-06-13T14:59:38.324+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:59:38Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:59:38Z, user=policyadmin)] 
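Note on the exchange above: it is PAP's standard request cycle. Each outbound PDP_UPDATE or PDP_STATE_CHANGE carries a requestId and registers a 30-second timer; the PDP answers with a PDP_STATUS whose response.responseTo echoes that id, at which point PAP cancels the timer and publishes the next queued request. Messages of the wrong type for a topic are simply discarded, and heartbeats (PDP_STATUS with no response block) match no pending request. A minimal sketch of that correlation logic in Python, with hypothetical names rather than the actual PAP classes:

    import json
    import time

    PENDING = {}  # requestId -> deadline in epoch ms, standing in for PAP's TimerManager

    def publish(msg: dict, timeout_ms: int = 30000) -> None:
        # register a timer keyed by requestId, as in "update timer registered Timer [...]"
        PENDING[msg["requestId"]] = time.time() * 1000 + timeout_ms

    def on_message(raw: str) -> None:
        msg = json.loads(raw)
        if msg.get("messageName") != "PDP_STATUS":
            return  # our own PDP_UPDATE echoed back: "discarding event of type PDP_UPDATE"
        resp = msg.get("response")
        if resp is None:
            return  # heartbeat: "no listeners for autonomous message of type PdpStatus"
        req_id = resp["responseTo"]
        if PENDING.pop(req_id, None) is not None:
            print(f"request {req_id} answered: {resp['responseStatus']}")  # timer cancelled
        else:
            print(f"no listener for request id {req_id}")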
policy-pap | [2025-06-13T14:59:39.013+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2025-06-13T14:59:39.014+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2025-06-13T14:59:39.014+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-13T14:59:39.014+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2025-06-13T14:59:39.015+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2025-06-13T14:59:39.025+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T14:59:39Z, user=policyadmin)]
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-pap | [2025-06-13T14:59:39.375+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-pap | [2025-06-13T14:59:39.383+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T14:59:39Z, user=policyadmin)]
policy-pap | [2025-06-13T14:59:39.898+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2025-06-13T14:59:39.902+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2025-06-13T14:59:42.040+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-pap | [2025-06-13T15:00:03.998+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a3dca398-6852-4742-8276-c0b15eab384a","timestampMs":1749826803987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T15:00:04.000+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-13T15:00:04.000+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a3dca398-6852-4742-8276-c0b15eab384a","timestampMs":1749826803987,"name":"apex-cde1f4ac-19de-4b61-8a45-a691fa1b825a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | syncing data to disk ... ok
postgres |
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres | waiting for server to start....2025-06-13 14:57:06.675 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 14:57:06.677 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 14:57:06.685 UTC [52] LOG: database system was shut down at 2025-06-13 14:57:06 UTC
postgres | 2025-06-13 14:57:06.690 UTC [49] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres |
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres |
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres |
postgres | 2025-06-13 14:57:07.875 UTC [49] LOG: received fast shutdown request
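The db-pg.sh trace above creates the policy_user role and then loops over the six policy databases, creating each one and handing ownership and all privileges to ${PGSQL_USER}. The same provisioning sequence could be driven from Python as below; this is a sketch only, assuming psql is on PATH and the "trust" local authentication that initdb warned about:

    import subprocess

    PGSQL_USER = "policy_user"      # values expanded in the xtrace above
    PGSQL_PASSWORD = "policy_user"
    DATABASES = ["migration", "pooling", "policyadmin", "policyclamp",
                 "operationshistory", "clampacm"]

    def psql(command: str) -> None:
        # mirrors the script's `psql -U postgres -d postgres --command ...` calls
        subprocess.run(["psql", "-U", "postgres", "-d", "postgres",
                        "--command", command], check=True)

    psql(f"CREATE USER {PGSQL_USER} WITH PASSWORD '{PGSQL_PASSWORD}';")
    for db in DATABASES:
        psql(f"CREATE DATABASE {db};")
        psql(f"ALTER DATABASE {db} OWNER TO {PGSQL_USER};")
        psql(f"GRANT ALL PRIVILEGES ON DATABASE {db} TO {PGSQL_USER};")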
postgres | waiting for server to shut down....2025-06-13 14:57:07.876 UTC [49] LOG: aborting any active transactions
postgres | 2025-06-13 14:57:07.884 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
postgres | 2025-06-13 14:57:07.885 UTC [50] LOG: shutting down
postgres | 2025-06-13 14:57:07.886 UTC [50] LOG: checkpoint starting: shutdown immediate
postgres | 2025-06-13 14:57:08.374 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.341 s, sync=0.141 s, total=0.490 s; sync files=1788, longest=0.008 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218
postgres | 2025-06-13 14:57:08.386 UTC [49] LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | 2025-06-13 14:57:08.502 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 14:57:08.503 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres | 2025-06-13 14:57:08.503 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres | 2025-06-13 14:57:08.506 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 14:57:08.516 UTC [102] LOG: database system was shut down at 2025-06-13 14:57:08 UTC
postgres | 2025-06-13 14:57:08.520 UTC [1] LOG: database system is ready to accept connections
prometheus | time=2025-06-13T14:57:02.705Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | time=2025-06-13T14:57:02.705Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)"
prometheus | time=2025-06-13T14:57:02.705Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | time=2025-06-13T14:57:02.707Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs
prometheus | time=2025-06-13T14:57:02.708Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090
prometheus | time=2025-06-13T14:57:02.709Z level=INFO source=main.go:1266 msg="Starting TSDB ..."
prometheus | time=2025-06-13T14:57:02.712Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090
prometheus | time=2025-06-13T14:57:02.712Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090
prometheus | time=2025-06-13T14:57:02.716Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb
prometheus | time=2025-06-13T14:57:02.716Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.26µs
prometheus | time=2025-06-13T14:57:02.717Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb
prometheus | time=2025-06-13T14:57:02.717Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=436.245µs
prometheus | time=2025-06-13T14:57:02.717Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=29.75µs wal_replay_duration=461.006µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.26µs total_replay_duration=551.067µs
prometheus | time=2025-06-13T14:57:02.720Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC
prometheus | time=2025-06-13T14:57:02.720Z level=INFO source=main.go:1290 msg="TSDB started"
prometheus | time=2025-06-13T14:57:02.720Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | time=2025-06-13T14:57:02.721Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75
prometheus | time=2025-06-13T14:57:02.721Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.62µs remote_storage=1.73µs web_handler=530ns query_engine=1.47µs scrape=269.323µs scrape_sd=284.114µs notify=251.313µs notify_sd=28.49µs rules=2.79µs tracing=8.93µs filename=/etc/prometheus/prometheus.yml totalDuration=1.489348ms
prometheus | time=2025-06-13T14:57:02.721Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests."
prometheus | time=2025-06-13T14:57:02.721Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager"
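Prometheus logs "Server is ready to receive web requests." once the TSDB is replayed and the configuration file is loaded. Rather than parsing the log, a setup script can gate on the readiness endpoint of Prometheus's management API (GET /-/ready on the web port shown above). A small polling sketch, assuming the compose setup maps the server to localhost:9090:

    import time
    import urllib.error
    import urllib.request

    def wait_ready(url: str = "http://localhost:9090/-/ready", timeout_s: int = 60) -> bool:
        # poll until Prometheus answers 200 on its readiness endpoint, or give up
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        return True
            except (urllib.error.URLError, OSError):
                pass  # not listening yet; retry
            time.sleep(1)
        return False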
component="rule manager" simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2025-06-13 14:57:07,555 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2025-06-13 14:57:07,629 INFO org.onap.policy.models.simulators starting simulator | 2025-06-13 14:57:07,629 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2025-06-13 14:57:07,898 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2025-06-13 14:57:07,900 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2025-06-13 14:57:08,118 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-13 14:57:08,129 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-13 14:57:08,141 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-13 14:57:08,146 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-13 
14:57:08,236 INFO Session workerName=node0 simulator | 2025-06-13 14:57:08,254 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-13 14:57:08,959 INFO Using GSON for REST calls simulator | 2025-06-13 14:57:09,028 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-13 14:57:09,037 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2025-06-13 14:57:09,040 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @2174ms simulator | 2025-06-13 14:57:09,040 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4092 ms. simulator | 2025-06-13 14:57:09,044 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2025-06-13 14:57:09,056 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-13 14:57:09,057 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-13 14:57:09,058 INFO JettyJerseyServer 
[JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-13 14:57:09,059 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-13 14:57:09,062 INFO Session workerName=node0 simulator | 2025-06-13 14:57:09,062 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-13 14:57:09,136 INFO Using GSON for REST calls simulator | 2025-06-13 14:57:09,160 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-13 14:57:09,163 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2025-06-13 14:57:09,163 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @2298ms simulator | 2025-06-13 14:57:09,164 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4895 ms. 
simulator | 2025-06-13 14:57:09,165 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2025-06-13 14:57:09,169 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-13 14:57:09,170 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-13 14:57:09,177 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-13 14:57:09,181 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-13 14:57:09,205 INFO Session workerName=node0 simulator | 2025-06-13 14:57:09,213 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-13 14:57:09,336 INFO Using GSON for REST calls simulator | 2025-06-13 14:57:09,356 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-13 14:57:09,372 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2025-06-13 14:57:09,372 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @2507ms simulator | 
2025-06-13 14:57:09,373 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4798 ms. simulator | 2025-06-13 14:57:09,374 INFO org.onap.policy.models.simulators started zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-13 14:57:06,887] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,890] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,890] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,890] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,891] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,893] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:57:06,893] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:57:06,893] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:57:06,893] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-13 14:57:06,894] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-13 14:57:06,894] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,894] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,894] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,894] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,895] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:57:06,895] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-13 14:57:06,905] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-13 14:57:06,908] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 14:57:06,908] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 14:57:06,910] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:57:06,918] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,918] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,919] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,919] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:57:06,920] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1
.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,920] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,921] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,922] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-13 14:57:06,922] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,922] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,925] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-13 14:57:06,925] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,925] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 14:57:06,927] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,927] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,928] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-13 14:57:06,928] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-13 14:57:06,928] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:06,951] INFO Logging initialized @404ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-13 14:57:07,006] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 14:57:07,006] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 14:57:07,026] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-13 14:57:07,086] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 14:57:07,086] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 14:57:07,087] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 14:57:07,090] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-13 14:57:07,098] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 14:57:07,107] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-13 14:57:07,107] INFO Started @565ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-13 14:57:07,107] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2025-06-13 14:57:07,110] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-13 14:57:07,111] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-13 14:57:07,112] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-13 14:57:07,113] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-13 14:57:07,122] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-13 14:57:07,122] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-13 14:57:07,122] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 14:57:07,122] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 14:57:07,126] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-13 14:57:07,126] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 14:57:07,131] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 14:57:07,132] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 14:57:07,136] INFO Snapshot taken in 3 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:57:07,157] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-13 14:57:07,157] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-13 14:57:07,181] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-13 14:57:07,182] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-13 14:57:08,308] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
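Two notes on the startup log above: the session-timeout bounds follow ZooKeeper's defaults (minSessionTimeout = 2 x tickTime, maxSessionTimeout = 20 x tickTime, hence 6000 ms and 60000 ms for the logged tickTime of 3000 ms), and the Jetty AdminServer is listening on port 8080 with command URL /commands. While the stack is still up, that endpoint allows a quick health probe; a minimal sketch, assuming the container's port 8080 is reachable from the host (the hostname and port mapping are assumptions, not read from this job's compose file):

  # List the admin commands the server exposes, then issue the "ruok" liveness check;
  # a healthy server answers with JSON whose "error" field is null.
  curl -s http://localhost:8080/commands
  curl -s http://localhost:8080/commands/ruok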
Tearing down containers...
Container policy-csit Stopping
Container policy-apex-pdp Stopping
Container grafana Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-apex-pdp Stopped
Container policy-apex-pdp Removing
Container policy-apex-pdp Removed
Container simulator Stopping
Container policy-pap Stopping
Container simulator Stopped
Container simulator Removing
Container simulator Removed
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2112 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins15337197404835127513.sh
---> sysstat.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins10932595387237787993.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins15402961352874217974.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-nDki from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-nDki/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
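The package-listing.sh trace above shows the technique: snapshot the dpkg state before and after the build, then diff the two lists so the archives record exactly what the job installed. A condensed sketch of that logic (simplified and hypothetical, not the actual LF releng script):

  #!/bin/bash
  # Snapshot installed packages; inside a Jenkins workspace this is the end-of-build pass.
  START=/tmp/packages_start.txt
  END=/tmp/packages_end.txt
  DIFF=/tmp/packages_diff.txt
  PACKAGES=$START
  [ -n "${WORKSPACE:-}" ] && PACKAGES=$END
  dpkg -l | grep '^ii' > "$PACKAGES"
  # With both snapshots present, record what changed during the build.
  if [ -f "$START" ] && [ -f "$END" ]; then
      diff "$START" "$END" > "$DIFF" || true   # diff exits 1 when the lists differ
  fi
  mkdir -p "${WORKSPACE:-.}/archives/"
  cp -f "$DIFF" "$END" "$START" "${WORKSPACE:-.}/archives/"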
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins2608432653114023639.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config3124502803820975598tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins6637883304156092131.sh
---> create-netrc.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins8965466909645235532.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-nDki from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-nDki/bin to PATH
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins1685081271498985756.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins16587307915963434989.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-nDki from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-nDki/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins11800743829346605601.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-nDki from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-nDki/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/816
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
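logs-deploy.sh drives the upload with lftools, which the venv installs just beforehand. Roughly equivalent manual invocations would look like the sketch below; this is a sketch only, since the exact subcommand flags vary across lftools releases, and $WORKSPACE and $BUILD_URL are Jenkins-provided variables:

  NEXUS_URL=https://nexus.onap.org
  NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/816
  # Upload workspace files matching the logged pattern, then the job's console logs.
  lftools deploy archives -p '**/target/surefire-reports/*-output.txt' "$NEXUS_URL" "$NEXUS_PATH" "$WORKSPACE"
  lftools deploy logs "$NEXUS_URL" "$NEXUS_PATH" "$BUILD_URL"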
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-20908 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   16G  140G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         881       23268           0        8017       30830
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:ff:56:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.254/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85960sec preferred_lft 85960sec
    inet6 fe80::f816:3eff:feff:56d1/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d8:49:2a:1f brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d8ff:fe49:2a1f/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20908)  06/13/25  _x86_64_  (8 CPU)

14:54:38     LINUX RESTART  (8 CPU)

14:55:02          tps      rtps      wtps   bread/s   bwrtn/s
14:56:01       384.36     58.60    325.76   3721.13  89305.68
14:57:01       517.55     20.45    497.10   2379.34 252816.66
14:58:01       286.50      2.77    283.74    341.41  25883.02
14:59:01       167.91      0.20    167.71     32.53  21097.95
15:00:01        54.77      0.22     54.56     12.40  12960.77
15:01:01        19.43      0.10     19.33     19.06    391.67
Average:       238.01     13.60    224.42   1076.97  67014.05

14:55:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
14:56:01     30152932  31697888   2786288      8.46     68072   1788912   1389680      4.09    852708   1644284    151144
14:57:01     24473544  31633960   8465676     25.70    151372   7072368   2012916      5.92   1021548   6847636       920
14:58:01     22178188  29646892  10761032     32.67    165828   7366036   8445928     24.85   3234628   6840652        32
14:59:01     21467280  29554664  11471940     34.83    195604   7899656   8820540     25.95   3423296   7302616     85360
15:00:01     21525532  29522832  11413688     34.65    206772   7801740   8832704     25.99   3457252   7209756       244
15:01:01     21981344  29934836  10957876     33.27    206984   7764540   7063880     20.78   3068008   7163896       420
Average:     23629803  30331845   9309417     28.26    165772   6615542   6094275     17.93   2509573   6168140     39687

14:55:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
14:56:01         ens3    483.88    336.49   1725.17     81.79      0.00      0.00      0.00      0.00
14:56:01           lo      1.76      1.76      0.20      0.20      0.00      0.00      0.00      0.00
14:56:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01  vethdcd60e3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01  veth0a4211c      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01  veth698ca9e      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01  vethdb1e1d2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:58:01  vethde6c032     91.80     91.55     16.03     18.62      0.00      0.00      0.00      0.00
14:58:01  veth233877e      2.18      2.63      0.29      0.24      0.00      0.00      0.00      0.00
14:58:01  veth0a4211c      0.32      0.68      0.03      0.51      0.00      0.00      0.00      0.00
14:58:01         ens3   2402.05   1387.69  44585.38    165.31      0.00      0.00      0.00      0.00
14:59:01  vethde6c032      0.20      0.25      0.55      0.02      0.00      0.00      0.00      0.00
14:59:01  veth233877e      3.95      5.55      0.68      0.50      0.00      0.00      0.00      0.00
14:59:01  veth0a4211c      0.48      0.52      0.05      1.10      0.00      0.00      0.00      0.00
14:59:01         ens3    239.99    165.22   2197.08     13.25      0.00      0.00      0.00      0.00
15:00:01  vethde6c032    102.97    102.47     13.34     26.40      0.00      0.00      0.00      0.00
15:00:01  veth233877e      3.20      4.72      0.53      0.36      0.00      0.00      0.00      0.00
15:00:01  veth0a4211c      0.53      0.58      0.05      1.37      0.00      0.00      0.00      0.00
15:00:01  vethec45229      2.25      2.03      1.72      1.85      0.00      0.00      0.00      0.00
15:01:01  vethde6c032      0.35      0.53      0.58      0.04      0.00      0.00      0.00      0.00
15:01:01         ens3   2664.54   1572.27  46790.51    195.24      0.00      0.00      0.00      0.00
15:01:01  veth1c3f36d     53.81     39.18      4.90      5.74      0.00      0.00      0.00      0.00
15:01:01  vetha67edcd      4.82      6.95      0.77      0.93      0.00      0.00      0.00      0.00
Average:  vethde6c032     32.64     32.56      5.10      7.54      0.00      0.00      0.00      0.00
Average:         ens3    440.82    260.31   7807.61     32.38      0.00      0.00      0.00      0.00
Average:  veth1c3f36d      8.99      6.55      0.82      0.96      0.00      0.00      0.00      0.00
Average:  vetha67edcd      0.80      1.16      0.13      0.15      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20908)  06/13/25  _x86_64_  (8 CPU)

14:54:38     LINUX RESTART  (8 CPU)

14:55:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
14:56:01        all     10.25      0.00      1.10      4.33      0.03     84.29
14:56:01          0     31.10      0.00      2.51      5.01      0.07     61.30
14:56:01          1     18.49      0.00      1.95      4.33      0.03     75.19
14:56:01          2     10.33      0.00      1.20     15.90      0.05     72.51
14:56:01          3      9.06      0.00      0.75      0.48      0.02     89.70
14:56:01          4      4.73      0.00      0.51      0.07      0.02     94.68
14:56:01          5      3.79      0.00      0.65      0.80      0.03     94.73
14:56:01          6      2.16      0.00      0.71      7.69      0.02     89.42
14:56:01          7      2.33      0.00      0.49      0.34      0.02     96.82
14:57:01        all     19.95      0.00      8.53      6.49      0.08     64.95
14:57:01          0     17.43      0.00      8.07      4.24      0.07     70.20
14:57:01          1     20.90      0.00      9.37      1.46      0.07     68.20
14:57:01          2     17.82      0.00      9.90     22.93      0.10     49.25
14:57:01          3     33.29      0.00      8.30      1.28      0.08     57.04
14:57:01          4     17.89      0.00      9.00      2.49      0.10     70.52
14:57:01          5     17.00      0.00      8.33      9.54      0.07     65.06
14:57:01          6     17.63      0.00      8.03      1.73      0.10     72.51
14:57:01          7     17.55      0.00      7.18      8.43      0.07     66.76
14:58:01        all     27.35      0.00      3.59      0.98      0.09     67.99
14:58:01          0     28.73      0.00      3.57      1.29      0.08     66.33
14:58:01          1     28.99      0.00      3.99      0.45      0.08     66.49
14:58:01          2     22.73      0.00      3.14      1.14      0.10     72.90
14:58:01          3     38.47      0.00      4.23      0.17      0.08     57.05
14:58:01          4     27.51      0.00      3.56      1.14      0.08     67.70
14:58:01          5     27.40      0.00      3.72      0.74      0.08     68.06
14:58:01          6     27.92      0.00      3.55      1.64      0.10     66.78
14:58:01          7     17.01      0.00      2.97      1.29      0.08     78.64
14:59:01        all      6.61      0.00      1.59      0.86      0.07     90.88
14:59:01          0      3.54      0.00      1.24      0.07      0.05     95.11
14:59:01          1     10.57      0.00      2.05      0.23      0.07     87.07
14:59:01          2      4.32      0.00      1.41      2.63      0.07     91.57
14:59:01          3      8.02      0.00      1.70      0.99      0.07     89.22
14:59:01          4      8.59      0.00      1.59      1.68      0.05     88.09
14:59:01          5     10.13      0.00      2.14      0.50      0.08     87.14
14:59:01          6      3.57      0.00      1.53      0.64      0.08     94.18
14:59:01          7      4.10      0.00      1.04      0.13      0.05     94.68
15:00:01        all      6.55      0.00      1.29      0.42      0.06     91.68
15:00:01          0      7.18      0.00      1.05      0.02      0.08     91.67
15:00:01          1      5.92      0.00      0.99      0.65      0.05     92.39
15:00:01          2      6.08      0.00      1.42      0.49      0.07     91.95
15:00:01          3      9.29      0.00      1.90      0.07      0.05     88.70
15:00:01          4      4.81      0.00      1.54      0.03      0.07     93.55
15:00:01          5      6.75      0.00      1.34      2.01      0.08     89.81
15:00:01          6      5.61      0.00      1.14      0.05      0.05     93.15
15:00:01          7      6.75      0.00      0.97      0.00      0.07     92.21
15:01:01        all      1.30      0.00      0.46      0.06      0.05     98.14
15:01:01          0      1.37      0.00      0.52      0.27      0.07     97.78
15:01:01          1      1.09      0.00      0.42      0.02      0.03     98.44
15:01:01          2      1.27      0.00      0.48      0.02      0.03     98.20
15:01:01          3      1.72      0.00      0.43      0.12      0.07     97.66
15:01:01          4      1.32      0.00      0.35      0.02      0.05     98.26
15:01:01          5      1.47      0.00      0.45      0.02      0.03     98.03
15:01:01          6      0.82      0.00      0.37      0.02      0.05     98.75
15:01:01          7      1.42      0.00      0.58      0.00      0.03     97.96
Average:        all     11.98      0.00      2.75      2.18      0.06     83.03
Average:          0     14.83      0.00      2.82      1.80      0.07     80.48
Average:          1     14.31      0.00      3.13      1.18      0.06     81.33
Average:          2     10.40      0.00      2.91      7.12      0.07     79.50
Average:          3     16.62      0.00      2.88      0.52      0.06     79.92
Average:          4     10.79      0.00      2.75      0.90      0.06     85.49
Average:          5     11.09      0.00      2.76      2.25      0.06     83.83
Average:          6      9.62      0.00      2.55      1.95      0.07     85.82
Average:          7      8.19      0.00      2.20      1.69      0.05     87.86
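The disk/memory/network report and the per-CPU breakdown above are sar output, as their section headers state. Both can be replayed offline from the binary data file the sysstat collector writes; a sketch, assuming the Ubuntu default data directory and a hypothetical day-13 file name:

  # Regenerate the same two reports from the recorded sysstat data.
  sar -b -r -n DEV -f /var/log/sysstat/sa13   # I/O rates, memory usage, per-interface traffic
  sar -P ALL -f /var/log/sysstat/sa13         # utilisation broken out per CPU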